2025-04-13 00:00:09.448447 | Job console starting...
2025-04-13 00:00:09.459653 | Updating repositories
2025-04-13 00:00:09.674050 | Preparing job workspace
2025-04-13 00:00:11.517923 | Running Ansible setup...
2025-04-13 00:00:17.627661 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-04-13 00:00:18.545723 |
2025-04-13 00:00:18.545836 | PLAY [Base pre]
2025-04-13 00:00:18.580847 |
2025-04-13 00:00:18.580957 | TASK [Setup log path fact]
2025-04-13 00:00:18.622398 | orchestrator | ok
2025-04-13 00:00:18.640360 |
2025-04-13 00:00:18.640500 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-13 00:00:18.669582 | orchestrator | ok
2025-04-13 00:00:18.694260 |
2025-04-13 00:00:18.694617 | TASK [emit-job-header : Print job information]
2025-04-13 00:00:18.766268 | # Job Information
2025-04-13 00:00:18.766464 | Ansible Version: 2.15.3
2025-04-13 00:00:18.766496 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-04-13 00:00:18.766521 | Pipeline: periodic-midnight
2025-04-13 00:00:18.766538 | Executor: 7d211f194f6a
2025-04-13 00:00:18.766554 | Triggered by: https://github.com/osism/testbed
2025-04-13 00:00:18.766569 | Event ID: 8a57b98e6b224f639de66ef7ec48cb0f
2025-04-13 00:00:18.778263 |
2025-04-13 00:00:18.778373 | LOOP [emit-job-header : Print node information]
2025-04-13 00:00:19.022905 | orchestrator | ok:
2025-04-13 00:00:19.023065 | orchestrator | # Node Information
2025-04-13 00:00:19.023098 | orchestrator | Inventory Hostname: orchestrator
2025-04-13 00:00:19.023594 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-04-13 00:00:19.023633 | orchestrator | Username: zuul-testbed03
2025-04-13 00:00:19.023656 | orchestrator | Distro: Debian 12.10
2025-04-13 00:00:19.023682 | orchestrator | Provider: static-testbed
2025-04-13 00:00:19.023732 | orchestrator | Label: testbed-orchestrator
2025-04-13 00:00:19.023753 | orchestrator | Product Name: OpenStack Nova
2025-04-13 00:00:19.023773 | orchestrator | Interface IP: 81.163.193.140
2025-04-13 00:00:19.114927 |
2025-04-13 00:00:19.115865 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-04-13 00:00:20.334953 | orchestrator -> localhost | changed
2025-04-13 00:00:20.343818 |
2025-04-13 00:00:20.343912 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-04-13 00:00:22.463248 | orchestrator -> localhost | changed
2025-04-13 00:00:22.489487 |
2025-04-13 00:00:22.489593 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-04-13 00:00:23.164946 | orchestrator -> localhost | ok
2025-04-13 00:00:23.171902 |
2025-04-13 00:00:23.171990 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-04-13 00:00:23.221103 | orchestrator | ok
2025-04-13 00:00:23.240215 | orchestrator | included: /var/lib/zuul/builds/6f5299302c9b4aa99b7dc55ec68fd24a/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-04-13 00:00:23.248278 |
2025-04-13 00:00:23.248357 | TASK [add-build-sshkey : Create Temp SSH key]
2025-04-13 00:00:24.619156 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-04-13 00:00:24.619309 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/6f5299302c9b4aa99b7dc55ec68fd24a/work/6f5299302c9b4aa99b7dc55ec68fd24a_id_rsa
2025-04-13 00:00:24.619338 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/6f5299302c9b4aa99b7dc55ec68fd24a/work/6f5299302c9b4aa99b7dc55ec68fd24a_id_rsa.pub
2025-04-13 00:00:24.619359 | orchestrator -> localhost | The key fingerprint is:
2025-04-13 00:00:24.619378 | orchestrator -> localhost | SHA256:70bAmreWlF9GdHFj9Z91+MN1I4m7VWITzWYm0Sl7n8I zuul-build-sshkey
2025-04-13 00:00:24.619395 | orchestrator -> localhost | The key's randomart image is:
2025-04-13 00:00:24.619413 | orchestrator -> localhost | +---[RSA 3072]----+
2025-04-13 00:00:24.619441 | orchestrator -> localhost | | o*+=|
2025-04-13 00:00:24.619458 | orchestrator -> localhost | | oo=@o|
2025-04-13 00:00:24.619482 | orchestrator -> localhost | | . o BOo*|
2025-04-13 00:00:24.619499 | orchestrator -> localhost | | o +.*oO|
2025-04-13 00:00:24.619515 | orchestrator -> localhost | | oSo o...+=|
2025-04-13 00:00:24.619531 | orchestrator -> localhost | | o +.. =E .o|
2025-04-13 00:00:24.619553 | orchestrator -> localhost | | o =.+ . |
2025-04-13 00:00:24.619570 | orchestrator -> localhost | | +.o |
2025-04-13 00:00:24.619586 | orchestrator -> localhost | | . .. |
2025-04-13 00:00:24.619602 | orchestrator -> localhost | +----[SHA256]-----+
2025-04-13 00:00:24.619643 | orchestrator -> localhost | ok: Runtime: 0:00:00.144656
2025-04-13 00:00:24.638729 |
2025-04-13 00:00:24.638837 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-04-13 00:00:24.705242 | orchestrator | ok
2025-04-13 00:00:24.735968 | orchestrator | included: /var/lib/zuul/builds/6f5299302c9b4aa99b7dc55ec68fd24a/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-04-13 00:00:24.775999 |
2025-04-13 00:00:24.776101 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-04-13 00:00:24.827323 | orchestrator | skipping: Conditional result was False
2025-04-13 00:00:24.834204 |
2025-04-13 00:00:24.834288 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-04-13 00:00:25.460096 | orchestrator | changed
2025-04-13 00:00:25.480916 |
2025-04-13 00:00:25.481019 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-04-13 00:00:25.774925 | orchestrator | ok
2025-04-13 00:00:25.789665 |
2025-04-13 00:00:25.789771 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-04-13 00:00:26.305361 | orchestrator | ok
2025-04-13 00:00:26.317192 |
2025-04-13 00:00:26.317277 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-04-13 00:00:26.752790 | orchestrator | ok
2025-04-13 00:00:26.760297 |
2025-04-13 00:00:26.760417 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-04-13 00:00:26.784133 | orchestrator | skipping: Conditional result was False
2025-04-13 00:00:26.791269 |
2025-04-13 00:00:26.791356 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-04-13 00:00:27.325969 | orchestrator -> localhost | changed
2025-04-13 00:00:27.340501 |
2025-04-13 00:00:27.340611 | TASK [add-build-sshkey : Add back temp key]
2025-04-13 00:00:27.810614 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/6f5299302c9b4aa99b7dc55ec68fd24a/work/6f5299302c9b4aa99b7dc55ec68fd24a_id_rsa (zuul-build-sshkey)
2025-04-13 00:00:27.810820 | orchestrator -> localhost | ok: Runtime: 0:00:00.023005
2025-04-13 00:00:27.819510 |
2025-04-13 00:00:27.819613 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-04-13 00:00:28.243426 | orchestrator | ok
2025-04-13 00:00:28.261601 |
2025-04-13 00:00:28.261714 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-04-13 00:00:28.285019 | orchestrator | skipping: Conditional result was False
2025-04-13 00:00:28.297249 |
2025-04-13 00:00:28.297348 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-04-13 00:00:28.702363 | orchestrator | ok
2025-04-13 00:00:28.725920 |
2025-04-13 00:00:28.726018 | TASK [validate-host : Define zuul_info_dir fact]
2025-04-13 00:00:28.782003 | orchestrator | ok
2025-04-13 00:00:28.791106 |
2025-04-13 00:00:28.791197 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-04-13 00:00:29.417517 | orchestrator -> localhost | ok
2025-04-13 00:00:29.424854 |
2025-04-13 00:00:29.424951 | TASK [validate-host : Collect information about the host]
2025-04-13 00:00:30.746955 | orchestrator | ok
2025-04-13 00:00:30.780249 |
2025-04-13 00:00:30.780355 | TASK [validate-host : Sanitize hostname]
2025-04-13 00:00:30.887523 | orchestrator | ok
2025-04-13 00:00:30.901428 |
2025-04-13 00:00:30.901538 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-04-13 00:00:31.672417 | orchestrator -> localhost | changed
2025-04-13 00:00:31.678925 |
2025-04-13 00:00:31.679016 | TASK [validate-host : Collect information about zuul worker]
2025-04-13 00:00:32.160143 | orchestrator | ok
2025-04-13 00:00:32.173083 |
2025-04-13 00:00:32.173187 | TASK [validate-host : Write out all zuul information for each host]
2025-04-13 00:00:32.874251 | orchestrator -> localhost | changed
2025-04-13 00:00:32.885776 |
2025-04-13 00:00:32.885870 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-04-13 00:00:33.170778 | orchestrator | ok
2025-04-13 00:00:33.183776 |
2025-04-13 00:00:33.183869 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-04-13 00:00:53.619855 | orchestrator | changed:
2025-04-13 00:00:53.620032 | orchestrator | .d..t...... src/
2025-04-13 00:00:53.620069 | orchestrator | .d..t...... src/github.com/
2025-04-13 00:00:53.620094 | orchestrator | .d..t...... src/github.com/osism/
2025-04-13 00:00:53.620115 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-04-13 00:00:53.620135 | orchestrator | RedHat.yml
2025-04-13 00:00:53.640354 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-04-13 00:00:53.640371 | orchestrator | RedHat.yml
2025-04-13 00:00:53.640440 | orchestrator | = 2.2.0"...
2025-04-13 00:01:06.410773 | orchestrator | 00:01:06.410 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-04-13 00:01:06.479287 | orchestrator | 00:01:06.478 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-04-13 00:01:07.741092 | orchestrator | 00:01:07.740 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-04-13 00:01:08.530642 | orchestrator | 00:01:08.530 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-04-13 00:01:09.746797 | orchestrator | 00:01:09.746 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-04-13 00:01:10.684797 | orchestrator | 00:01:10.684 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-04-13 00:01:11.879233 | orchestrator | 00:01:11.879 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
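The add-build-sshkey tasks earlier in this log (Create Temp SSH key, enable access via the build key, add it back to the agent) can be reproduced locally. A minimal sketch, assuming OpenSSH's `ssh-keygen` is installed; the build UUID, key size, and comment are taken from the log output above, while the exact flags the zuul-jobs role passes are an assumption:

```shell
# Sketch of the build-key flow; the role's real task options may differ.
workdir="$(mktemp -d)"
build_uuid="6f5299302c9b4aa99b7dc55ec68fd24a"   # build UUID seen in this log

# "Create Temp SSH key": 3072-bit RSA, empty passphrase, zuul-build-sshkey comment
ssh-keygen -t rsa -b 3072 -N "" -C "zuul-build-sshkey" \
  -f "${workdir}/${build_uuid}_id_rsa"

# "Enable access via build key on all nodes": append the public key
mkdir -p "${workdir}/.ssh"
cat "${workdir}/${build_uuid}_id_rsa.pub" >> "${workdir}/.ssh/authorized_keys"
chmod 600 "${workdir}/.ssh/authorized_keys"
```

The role additionally removes the Zuul master key from the local agent, re-adds the temp key with `ssh-add`, and verifies SSH connectivity before the job proceeds, as the subsequent tasks in this log show.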
2025-04-13 00:01:14.080564 | orchestrator | 00:01:14.080 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-04-13 00:01:14.080627 | orchestrator | 00:01:14.080 STDOUT terraform: Providers are signed by their developers.
2025-04-13 00:01:14.080634 | orchestrator | 00:01:14.080 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-04-13 00:01:14.080641 | orchestrator | 00:01:14.080 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-04-13 00:01:14.080647 | orchestrator | 00:01:14.080 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-04-13 00:01:14.080652 | orchestrator | 00:01:14.080 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-04-13 00:01:14.080657 | orchestrator | 00:01:14.080 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-04-13 00:01:14.080662 | orchestrator | 00:01:14.080 STDOUT terraform: you run "tofu init" in the future.
2025-04-13 00:01:14.080669 | orchestrator | 00:01:14.080 STDOUT terraform: OpenTofu has been successfully initialized!
2025-04-13 00:01:14.080723 | orchestrator | 00:01:14.080 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-04-13 00:01:14.080731 | orchestrator | 00:01:14.080 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-04-13 00:01:14.080736 | orchestrator | 00:01:14.080 STDOUT terraform: should now work.
2025-04-13 00:01:14.080747 | orchestrator | 00:01:14.080 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-04-13 00:01:14.080765 | orchestrator | 00:01:14.080 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-04-13 00:01:14.080815 | orchestrator | 00:01:14.080 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-04-13 00:01:15.140880 | orchestrator | 00:01:15.140 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-04-13 00:01:15.340628 | orchestrator | 00:01:15.340 STDOUT terraform: Created and switched to workspace "ci"!
2025-04-13 00:01:15.340782 | orchestrator | 00:01:15.340 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-04-13 00:01:15.340917 | orchestrator | 00:01:15.340 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-04-13 00:01:15.340960 | orchestrator | 00:01:15.340 STDOUT terraform: for this configuration.
2025-04-13 00:01:15.567733 | orchestrator | 00:01:15.567 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-04-13 00:01:15.687768 | orchestrator | 00:01:15.687 STDOUT terraform: ci.auto.tfvars
2025-04-13 00:01:16.321550 | orchestrator | 00:01:16.321 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-04-13 00:01:17.757522 | orchestrator | 00:01:17.757 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-04-13 00:01:18.273375 | orchestrator | 00:01:18.272 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-04-13 00:01:18.505738 | orchestrator | 00:01:18.505 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-04-13 00:01:18.505816 | orchestrator | 00:01:18.505 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-04-13 00:01:18.505854 | orchestrator | 00:01:18.505 STDOUT terraform:   + create
2025-04-13 00:01:18.505918 | orchestrator | 00:01:18.505 STDOUT terraform:   <= read (data resources)
2025-04-13 00:01:18.505992 | orchestrator | 00:01:18.505 STDOUT terraform: OpenTofu will perform the following actions:
2025-04-13 00:01:18.506203 | orchestrator | 00:01:18.506 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-04-13 00:01:18.506286 | orchestrator | 00:01:18.506 STDOUT terraform:   # (config refers to values not yet known)
2025-04-13 00:01:18.506374 | orchestrator | 00:01:18.506 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-04-13 00:01:18.506454 | orchestrator | 00:01:18.506 STDOUT terraform:   + checksum = (known after apply)
2025-04-13 00:01:18.506534 | orchestrator | 00:01:18.506 STDOUT terraform:   + created_at = (known after apply)
2025-04-13 00:01:18.506615 | orchestrator | 00:01:18.506 STDOUT terraform:   + file = (known after apply)
2025-04-13 00:01:18.506695 | orchestrator | 00:01:18.506 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.506774 | orchestrator | 00:01:18.506 STDOUT terraform:   + metadata = (known after apply)
2025-04-13 00:01:18.506851 | orchestrator | 00:01:18.506 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-04-13 00:01:18.506934 | orchestrator | 00:01:18.506 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-04-13 00:01:18.506990 | orchestrator | 00:01:18.506 STDOUT terraform:   + most_recent = true
2025-04-13 00:01:18.507066 | orchestrator | 00:01:18.506 STDOUT terraform:   + name = (known after apply)
2025-04-13 00:01:18.507173 | orchestrator | 00:01:18.507 STDOUT terraform:   + protected = (known after apply)
2025-04-13 00:01:18.507251 | orchestrator | 00:01:18.507 STDOUT terraform:   + region = (known after apply)
2025-04-13 00:01:18.507331 | orchestrator | 00:01:18.507 STDOUT terraform:   + schema = (known after apply)
2025-04-13 00:01:18.507412 | orchestrator | 00:01:18.507 STDOUT terraform:   + size_bytes = (known after apply)
2025-04-13 00:01:18.507493 | orchestrator | 00:01:18.507 STDOUT terraform:   + tags = (known after apply)
2025-04-13 00:01:18.507593 | orchestrator | 00:01:18.507 STDOUT terraform:   + updated_at = (known after apply)
2025-04-13 00:01:18.507646 | orchestrator | 00:01:18.507 STDOUT terraform:   }
2025-04-13 00:01:18.507860 | orchestrator | 00:01:18.507 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-04-13 00:01:18.507946 | orchestrator | 00:01:18.507 STDOUT terraform:   # (config refers to values not yet known)
2025-04-13 00:01:18.508047 | orchestrator | 00:01:18.507 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-04-13 00:01:18.508183 | orchestrator | 00:01:18.508 STDOUT terraform:   + checksum = (known after apply)
2025-04-13 00:01:18.508258 | orchestrator | 00:01:18.508 STDOUT terraform:   + created_at = (known after apply)
2025-04-13 00:01:18.508341 | orchestrator | 00:01:18.508 STDOUT terraform:   + file = (known after apply)
2025-04-13 00:01:18.508422 | orchestrator | 00:01:18.508 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.508500 | orchestrator | 00:01:18.508 STDOUT terraform:   + metadata = (known after apply)
2025-04-13 00:01:18.508581 | orchestrator | 00:01:18.508 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-04-13 00:01:18.508659 | orchestrator | 00:01:18.508 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-04-13 00:01:18.508705 | orchestrator | 00:01:18.508 STDOUT terraform:   + most_recent = true
2025-04-13 00:01:18.508776 | orchestrator | 00:01:18.508 STDOUT terraform:   + name = (known after apply)
2025-04-13 00:01:18.508845 | orchestrator | 00:01:18.508 STDOUT terraform:   + protected = (known after apply)
2025-04-13 00:01:18.508917 | orchestrator | 00:01:18.508 STDOUT terraform:   + region = (known after apply)
2025-04-13 00:01:18.508990 | orchestrator | 00:01:18.508 STDOUT terraform:   + schema = (known after apply)
2025-04-13 00:01:18.509098 | orchestrator | 00:01:18.508 STDOUT terraform:   + size_bytes = (known after apply)
2025-04-13 00:01:18.509207 | orchestrator | 00:01:18.509 STDOUT terraform:   + tags = (known after apply)
2025-04-13 00:01:18.509318 | orchestrator | 00:01:18.509 STDOUT terraform:   + updated_at = (known after apply)
2025-04-13 00:01:18.509355 | orchestrator | 00:01:18.509 STDOUT terraform:   }
2025-04-13 00:01:18.509457 | orchestrator | 00:01:18.509 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-04-13 00:01:18.509536 | orchestrator | 00:01:18.509 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-04-13 00:01:18.509629 | orchestrator | 00:01:18.509 STDOUT terraform:   + content = (known after apply)
2025-04-13 00:01:18.509719 | orchestrator | 00:01:18.509 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-04-13 00:01:18.509805 | orchestrator | 00:01:18.509 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-04-13 00:01:18.509893 | orchestrator | 00:01:18.509 STDOUT terraform:   + content_md5 = (known after apply)
2025-04-13 00:01:18.509984 | orchestrator | 00:01:18.509 STDOUT terraform:   + content_sha1 = (known after apply)
2025-04-13 00:01:18.510161 | orchestrator | 00:01:18.509 STDOUT terraform:   + content_sha256 = (known after apply)
2025-04-13 00:01:18.510250 | orchestrator | 00:01:18.510 STDOUT terraform:   + content_sha512 = (known after apply)
2025-04-13 00:01:18.510348 | orchestrator | 00:01:18.510 STDOUT terraform:   + directory_permission = "0777"
2025-04-13 00:01:18.510436 | orchestrator | 00:01:18.510 STDOUT terraform:   + file_permission = "0644"
2025-04-13 00:01:18.510506 | orchestrator | 00:01:18.510 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-04-13 00:01:18.510581 | orchestrator | 00:01:18.510 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.510608 | orchestrator | 00:01:18.510 STDOUT terraform:   }
2025-04-13 00:01:18.510664 | orchestrator | 00:01:18.510 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-04-13 00:01:18.510718 | orchestrator | 00:01:18.510 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-04-13 00:01:18.510793 | orchestrator | 00:01:18.510 STDOUT terraform:   + content = (known after apply)
2025-04-13 00:01:18.510864 | orchestrator | 00:01:18.510 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-04-13 00:01:18.510937 | orchestrator | 00:01:18.510 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-04-13 00:01:18.511009 | orchestrator | 00:01:18.510 STDOUT terraform:   + content_md5 = (known after apply)
2025-04-13 00:01:18.511122 | orchestrator | 00:01:18.511 STDOUT terraform:   + content_sha1 = (known after apply)
2025-04-13 00:01:18.511167 | orchestrator | 00:01:18.511 STDOUT terraform:   + content_sha256 = (known after apply)
2025-04-13 00:01:18.511240 | orchestrator | 00:01:18.511 STDOUT terraform:   + content_sha512 = (known after apply)
2025-04-13 00:01:18.511288 | orchestrator | 00:01:18.511 STDOUT terraform:   + directory_permission = "0777"
2025-04-13 00:01:18.511337 | orchestrator | 00:01:18.511 STDOUT terraform:   + file_permission = "0644"
2025-04-13 00:01:18.511403 | orchestrator | 00:01:18.511 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-04-13 00:01:18.511512 | orchestrator | 00:01:18.511 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.511542 | orchestrator | 00:01:18.511 STDOUT terraform:   }
2025-04-13 00:01:18.511594 | orchestrator | 00:01:18.511 STDOUT terraform:   # local_file.inventory will be created
2025-04-13 00:01:18.511645 | orchestrator | 00:01:18.511 STDOUT terraform:   + resource "local_file" "inventory" {
2025-04-13 00:01:18.511718 | orchestrator | 00:01:18.511 STDOUT terraform:   + content = (known after apply)
2025-04-13 00:01:18.511791 | orchestrator | 00:01:18.511 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-04-13 00:01:18.511861 | orchestrator | 00:01:18.511 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-04-13 00:01:18.511935 | orchestrator | 00:01:18.511 STDOUT terraform:   + content_md5 = (known after apply)
2025-04-13 00:01:18.512009 | orchestrator | 00:01:18.511 STDOUT terraform:   + content_sha1 = (known after apply)
2025-04-13 00:01:18.512096 | orchestrator | 00:01:18.512 STDOUT terraform:   + content_sha256 = (known after apply)
2025-04-13 00:01:18.512164 | orchestrator | 00:01:18.512 STDOUT terraform:   + content_sha512 = (known after apply)
2025-04-13 00:01:18.512215 | orchestrator | 00:01:18.512 STDOUT terraform:   + directory_permission = "0777"
2025-04-13 00:01:18.512265 | orchestrator | 00:01:18.512 STDOUT terraform:   + file_permission = "0644"
2025-04-13 00:01:18.512331 | orchestrator | 00:01:18.512 STDOUT terraform:   + filename = "inventory.ci"
2025-04-13 00:01:18.512401 | orchestrator | 00:01:18.512 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.512428 | orchestrator | 00:01:18.512 STDOUT terraform:   }
2025-04-13 00:01:18.512489 | orchestrator | 00:01:18.512 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-04-13 00:01:18.512552 | orchestrator | 00:01:18.512 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-04-13 00:01:18.512617 | orchestrator | 00:01:18.512 STDOUT terraform:   + content = (sensitive value)
2025-04-13 00:01:18.512689 | orchestrator | 00:01:18.512 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-04-13 00:01:18.512763 | orchestrator | 00:01:18.512 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-04-13 00:01:18.512835 | orchestrator | 00:01:18.512 STDOUT terraform:   + content_md5 = (known after apply)
2025-04-13 00:01:18.512915 | orchestrator | 00:01:18.512 STDOUT terraform:   + content_sha1 = (known after apply)
2025-04-13 00:01:18.512977 | orchestrator | 00:01:18.512 STDOUT terraform:   + content_sha256 = (known after apply)
2025-04-13 00:01:18.513047 | orchestrator | 00:01:18.512 STDOUT terraform:   + content_sha512 = (known after apply)
2025-04-13 00:01:18.513134 | orchestrator | 00:01:18.513 STDOUT terraform:   + directory_permission = "0700"
2025-04-13 00:01:18.513182 | orchestrator | 00:01:18.513 STDOUT terraform:   + file_permission = "0600"
2025-04-13 00:01:18.513244 | orchestrator | 00:01:18.513 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-04-13 00:01:18.513315 | orchestrator | 00:01:18.513 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.513337 | orchestrator | 00:01:18.513 STDOUT terraform:   }
2025-04-13 00:01:18.513389 | orchestrator | 00:01:18.513 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-04-13 00:01:18.513441 | orchestrator | 00:01:18.513 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-04-13 00:01:18.513476 | orchestrator | 00:01:18.513 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.513499 | orchestrator | 00:01:18.513 STDOUT terraform:   }
2025-04-13 00:01:18.513652 | orchestrator | 00:01:18.513 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-04-13 00:01:18.513734 | orchestrator | 00:01:18.513 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-04-13 00:01:18.513789 | orchestrator | 00:01:18.513 STDOUT terraform:   + attachment = (known after apply)
2025-04-13 00:01:18.513824 | orchestrator | 00:01:18.513 STDOUT terraform:   + availability_zone = "nova"
2025-04-13 00:01:18.513878 | orchestrator | 00:01:18.513 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.513932 | orchestrator | 00:01:18.513 STDOUT terraform:   + image_id = (known after apply)
2025-04-13 00:01:18.513983 | orchestrator | 00:01:18.513 STDOUT terraform:   + metadata = (known after apply)
2025-04-13 00:01:18.514071 | orchestrator | 00:01:18.513 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-04-13 00:01:18.514135 | orchestrator | 00:01:18.514 STDOUT terraform:   + region = (known after apply)
2025-04-13 00:01:18.514171 | orchestrator | 00:01:18.514 STDOUT terraform:   + size = 80
2025-04-13 00:01:18.514208 | orchestrator | 00:01:18.514 STDOUT terraform:   + volume_type = "ssd"
2025-04-13 00:01:18.514232 | orchestrator | 00:01:18.514 STDOUT terraform:   }
2025-04-13 00:01:18.514313 | orchestrator | 00:01:18.514 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-04-13 00:01:18.514394 | orchestrator | 00:01:18.514 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-13 00:01:18.514447 | orchestrator | 00:01:18.514 STDOUT terraform:   + attachment = (known after apply)
2025-04-13 00:01:18.514483 | orchestrator | 00:01:18.514 STDOUT terraform:   + availability_zone = "nova"
2025-04-13 00:01:18.514538 | orchestrator | 00:01:18.514 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.514591 | orchestrator | 00:01:18.514 STDOUT terraform:   + image_id = (known after apply)
2025-04-13 00:01:18.514643 | orchestrator | 00:01:18.514 STDOUT terraform:   + metadata = (known after apply)
2025-04-13 00:01:18.514710 | orchestrator | 00:01:18.514 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-04-13 00:01:18.514763 | orchestrator | 00:01:18.514 STDOUT terraform:   + region = (known after apply)
2025-04-13 00:01:18.514797 | orchestrator | 00:01:18.514 STDOUT terraform:   + size = 80
2025-04-13 00:01:18.514832 | orchestrator | 00:01:18.514 STDOUT terraform:   + volume_type = "ssd"
2025-04-13 00:01:18.514856 | orchestrator | 00:01:18.514 STDOUT terraform:   }
2025-04-13 00:01:18.514936 | orchestrator | 00:01:18.514 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-04-13 00:01:18.515013 | orchestrator | 00:01:18.514 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-13 00:01:18.515066 | orchestrator | 00:01:18.515 STDOUT terraform:   + attachment = (known after apply)
2025-04-13 00:01:18.515116 | orchestrator | 00:01:18.515 STDOUT terraform:   + availability_zone = "nova"
2025-04-13 00:01:18.515170 | orchestrator | 00:01:18.515 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.515223 | orchestrator | 00:01:18.515 STDOUT terraform:   + image_id = (known after apply)
2025-04-13 00:01:18.515275 | orchestrator | 00:01:18.515 STDOUT terraform:   + metadata = (known after apply)
2025-04-13 00:01:18.515342 | orchestrator | 00:01:18.515 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-04-13 00:01:18.515399 | orchestrator | 00:01:18.515 STDOUT terraform:   + region = (known after apply)
2025-04-13 00:01:18.515432 | orchestrator | 00:01:18.515 STDOUT terraform:   + size = 80
2025-04-13 00:01:18.515467 | orchestrator | 00:01:18.515 STDOUT terraform:   + volume_type = "ssd"
2025-04-13 00:01:18.515491 | orchestrator | 00:01:18.515 STDOUT terraform:   }
2025-04-13 00:01:18.515573 | orchestrator | 00:01:18.515 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-04-13 00:01:18.515651 | orchestrator | 00:01:18.515 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-13 00:01:18.515705 | orchestrator | 00:01:18.515 STDOUT terraform:   + attachment = (known after apply)
2025-04-13 00:01:18.515741 | orchestrator | 00:01:18.515 STDOUT terraform:   + availability_zone = "nova"
2025-04-13 00:01:18.515796 | orchestrator | 00:01:18.515 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.515883 | orchestrator | 00:01:18.515 STDOUT terraform:   + image_id = (known after apply)
2025-04-13 00:01:18.515935 | orchestrator | 00:01:18.515 STDOUT terraform:   + metadata = (known after apply)
2025-04-13 00:01:18.516003 | orchestrator | 00:01:18.515 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-04-13 00:01:18.516058 | orchestrator | 00:01:18.516 STDOUT terraform:   + region = (known after apply)
2025-04-13 00:01:18.516126 | orchestrator | 00:01:18.516 STDOUT terraform:   + size = 80
2025-04-13 00:01:18.516163 | orchestrator | 00:01:18.516 STDOUT terraform:   + volume_type = "ssd"
2025-04-13 00:01:18.516185 | orchestrator | 00:01:18.516 STDOUT terraform:   }
2025-04-13 00:01:18.516266 | orchestrator | 00:01:18.516 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-04-13 00:01:18.516346 | orchestrator | 00:01:18.516 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-13 00:01:18.516397 | orchestrator | 00:01:18.516 STDOUT terraform:   + attachment = (known after apply)
2025-04-13 00:01:18.516433 | orchestrator | 00:01:18.516 STDOUT terraform:   + availability_zone = "nova"
2025-04-13 00:01:18.516486 | orchestrator | 00:01:18.516 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.516540 | orchestrator | 00:01:18.516 STDOUT terraform:   + image_id = (known after apply)
2025-04-13 00:01:18.516594 | orchestrator | 00:01:18.516 STDOUT terraform:   + metadata = (known after apply)
2025-04-13 00:01:18.516660 | orchestrator | 00:01:18.516 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-04-13 00:01:18.516709 | orchestrator | 00:01:18.516 STDOUT terraform:   + region = (known after apply)
2025-04-13 00:01:18.516743 | orchestrator | 00:01:18.516 STDOUT terraform:   + size = 80
2025-04-13 00:01:18.516776 | orchestrator | 00:01:18.516 STDOUT terraform:   + volume_type = "ssd"
2025-04-13 00:01:18.516798 | orchestrator | 00:01:18.516 STDOUT terraform:   }
2025-04-13 00:01:18.516874 | orchestrator | 00:01:18.516 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-04-13 00:01:18.516947 | orchestrator | 00:01:18.516 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-13 00:01:18.516996 | orchestrator | 00:01:18.516 STDOUT terraform:   + attachment = (known after apply)
2025-04-13 00:01:18.517030 | orchestrator | 00:01:18.516 STDOUT terraform:   + availability_zone = "nova"
2025-04-13 00:01:18.517091 | orchestrator | 00:01:18.517 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.517136 | orchestrator | 00:01:18.517 STDOUT terraform:   + image_id = (known after apply)
2025-04-13 00:01:18.517186 | orchestrator | 00:01:18.517 STDOUT terraform:   + metadata = (known after apply)
2025-04-13 00:01:18.517251 | orchestrator | 00:01:18.517 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-04-13 00:01:18.517301 | orchestrator | 00:01:18.517 STDOUT terraform:   + region = (known after apply)
2025-04-13 00:01:18.517334 | orchestrator | 00:01:18.517 STDOUT terraform:   + size = 80
2025-04-13 00:01:18.517367 | orchestrator | 00:01:18.517 STDOUT terraform:   + volume_type = "ssd"
2025-04-13 00:01:18.517390 | orchestrator | 00:01:18.517 STDOUT terraform:   }
2025-04-13 00:01:18.517463 | orchestrator | 00:01:18.517 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-04-13 00:01:18.517536 | orchestrator | 00:01:18.517 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-13 00:01:18.517587 | orchestrator | 00:01:18.517 STDOUT terraform:   + attachment = (known after apply)
2025-04-13 00:01:18.517619 | orchestrator | 00:01:18.517 STDOUT terraform:   + availability_zone = "nova"
2025-04-13 00:01:18.517669 | orchestrator | 00:01:18.517 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.517717 | orchestrator | 00:01:18.517 STDOUT terraform:   + image_id = (known after apply)
2025-04-13 00:01:18.517768 | orchestrator | 00:01:18.517 STDOUT terraform:   + metadata = (known after apply)
2025-04-13 00:01:18.517846 | orchestrator | 00:01:18.517 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-04-13 00:01:18.517896 | orchestrator | 00:01:18.517 STDOUT terraform:   + region = (known after apply)
2025-04-13 00:01:18.517930 | orchestrator | 00:01:18.517 STDOUT terraform:   + size = 80
2025-04-13 00:01:18.517965 | orchestrator | 00:01:18.517 STDOUT terraform:   + volume_type = "ssd"
2025-04-13 00:01:18.517986 | orchestrator | 00:01:18.517 STDOUT terraform:   }
2025-04-13 00:01:18.518090 | orchestrator | 00:01:18.517 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-04-13 00:01:18.518173 | orchestrator | 00:01:18.518 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-13 00:01:18.518222 | orchestrator | 00:01:18.518 STDOUT terraform:   + attachment = (known after apply)
2025-04-13 00:01:18.518257 | orchestrator | 00:01:18.518 STDOUT terraform:   + availability_zone = "nova"
2025-04-13 00:01:18.518311 | orchestrator | 00:01:18.518 STDOUT terraform:   + id = (known after apply)
2025-04-13 00:01:18.518359 | orchestrator | 00:01:18.518 STDOUT terraform:   + metadata = (known after apply)
2025-04-13 00:01:18.518419 | orchestrator | 00:01:18.518 STDOUT terraform:   + name = "testbed-volume-0-node-0"
2025-04-13 00:01:18.518468 | orchestrator | 00:01:18.518 STDOUT terraform:   + region = (known after apply)
2025-04-13 00:01:18.518503 | orchestrator | 00:01:18.518 STDOUT terraform:   + size = 20
2025-04-13 00:01:18.518538 | orchestrator | 00:01:18.518 STDOUT terraform:   + volume_type = "ssd"
2025-04-13 00:01:18.518571 | orchestrator | 00:01:18.518 STDOUT terraform:   }
2025-04-13 00:01:18.518686 | orchestrator | 00:01:18.518 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-04-13 00:01:18.518765 | orchestrator | 00:01:18.518 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-13 00:01:18.518817 | orchestrator | 00:01:18.518 STDOUT terraform:   + attachment = (known after apply)
2025-04-13 00:01:18.518857 | orchestrator | 00:01:18.518 STDOUT terraform:
+ availability_zone = "nova" 2025-04-13 00:01:18.518909 | orchestrator | 00:01:18.518 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.518958 | orchestrator | 00:01:18.518 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.519018 | orchestrator | 00:01:18.518 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-04-13 00:01:18.519067 | orchestrator | 00:01:18.519 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.519115 | orchestrator | 00:01:18.519 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.519144 | orchestrator | 00:01:18.519 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.519166 | orchestrator | 00:01:18.519 STDOUT terraform:  } 2025-04-13 00:01:18.519239 | orchestrator | 00:01:18.519 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-04-13 00:01:18.519308 | orchestrator | 00:01:18.519 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.519357 | orchestrator | 00:01:18.519 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.519390 | orchestrator | 00:01:18.519 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.519441 | orchestrator | 00:01:18.519 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.519490 | orchestrator | 00:01:18.519 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.519551 | orchestrator | 00:01:18.519 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-04-13 00:01:18.519600 | orchestrator | 00:01:18.519 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.519634 | orchestrator | 00:01:18.519 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.519666 | orchestrator | 00:01:18.519 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.519687 | orchestrator | 00:01:18.519 STDOUT terraform:  } 2025-04-13 00:01:18.519758 | orchestrator | 00:01:18.519 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-04-13 00:01:18.519830 | orchestrator | 00:01:18.519 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.519879 | orchestrator | 00:01:18.519 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.519912 | orchestrator | 00:01:18.519 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.519963 | orchestrator | 00:01:18.519 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.520012 | orchestrator | 00:01:18.519 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.520089 | orchestrator | 00:01:18.520 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-04-13 00:01:18.520153 | orchestrator | 00:01:18.520 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.520185 | orchestrator | 00:01:18.520 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.520219 | orchestrator | 00:01:18.520 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.520240 | orchestrator | 00:01:18.520 STDOUT terraform:  } 2025-04-13 00:01:18.520314 | orchestrator | 00:01:18.520 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-04-13 00:01:18.520376 | orchestrator | 00:01:18.520 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.520419 | orchestrator | 00:01:18.520 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.520449 | orchestrator | 00:01:18.520 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.520494 | orchestrator | 00:01:18.520 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.520539 | orchestrator | 00:01:18.520 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.520592 | orchestrator | 00:01:18.520 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-04-13 00:01:18.520635 | orchestrator | 00:01:18.520 STDOUT 
terraform:  + region = (known after apply) 2025-04-13 00:01:18.520665 | orchestrator | 00:01:18.520 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.520695 | orchestrator | 00:01:18.520 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.520714 | orchestrator | 00:01:18.520 STDOUT terraform:  } 2025-04-13 00:01:18.520778 | orchestrator | 00:01:18.520 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-04-13 00:01:18.520838 | orchestrator | 00:01:18.520 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.520882 | orchestrator | 00:01:18.520 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.520913 | orchestrator | 00:01:18.520 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.520964 | orchestrator | 00:01:18.520 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.521011 | orchestrator | 00:01:18.520 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.521061 | orchestrator | 00:01:18.521 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-04-13 00:01:18.521116 | orchestrator | 00:01:18.521 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.521144 | orchestrator | 00:01:18.521 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.521174 | orchestrator | 00:01:18.521 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.521192 | orchestrator | 00:01:18.521 STDOUT terraform:  } 2025-04-13 00:01:18.521255 | orchestrator | 00:01:18.521 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-04-13 00:01:18.521318 | orchestrator | 00:01:18.521 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.521361 | orchestrator | 00:01:18.521 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.521390 | orchestrator | 00:01:18.521 STDOUT terraform:  + availability_zone = "nova" 
2025-04-13 00:01:18.521435 | orchestrator | 00:01:18.521 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.521479 | orchestrator | 00:01:18.521 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.521532 | orchestrator | 00:01:18.521 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-04-13 00:01:18.521576 | orchestrator | 00:01:18.521 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.521605 | orchestrator | 00:01:18.521 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.521639 | orchestrator | 00:01:18.521 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.521647 | orchestrator | 00:01:18.521 STDOUT terraform:  } 2025-04-13 00:01:18.521715 | orchestrator | 00:01:18.521 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-04-13 00:01:18.521777 | orchestrator | 00:01:18.521 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.521820 | orchestrator | 00:01:18.521 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.521849 | orchestrator | 00:01:18.521 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.521893 | orchestrator | 00:01:18.521 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.521936 | orchestrator | 00:01:18.521 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.521990 | orchestrator | 00:01:18.521 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-04-13 00:01:18.522051 | orchestrator | 00:01:18.521 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.522113 | orchestrator | 00:01:18.522 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.522121 | orchestrator | 00:01:18.522 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.522127 | orchestrator | 00:01:18.522 STDOUT terraform:  } 2025-04-13 00:01:18.522188 | orchestrator | 00:01:18.522 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-04-13 00:01:18.522248 | orchestrator | 00:01:18.522 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.522292 | orchestrator | 00:01:18.522 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.522322 | orchestrator | 00:01:18.522 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.522366 | orchestrator | 00:01:18.522 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.522408 | orchestrator | 00:01:18.522 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.522461 | orchestrator | 00:01:18.522 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-04-13 00:01:18.522506 | orchestrator | 00:01:18.522 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.522534 | orchestrator | 00:01:18.522 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.522565 | orchestrator | 00:01:18.522 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.522583 | orchestrator | 00:01:18.522 STDOUT terraform:  } 2025-04-13 00:01:18.522648 | orchestrator | 00:01:18.522 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-04-13 00:01:18.522710 | orchestrator | 00:01:18.522 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.522753 | orchestrator | 00:01:18.522 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.522782 | orchestrator | 00:01:18.522 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.522827 | orchestrator | 00:01:18.522 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.522870 | orchestrator | 00:01:18.522 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.522925 | orchestrator | 00:01:18.522 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-04-13 00:01:18.522968 | orchestrator | 00:01:18.522 STDOUT 
terraform:  + region = (known after apply) 2025-04-13 00:01:18.522996 | orchestrator | 00:01:18.522 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.523026 | orchestrator | 00:01:18.522 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.523045 | orchestrator | 00:01:18.523 STDOUT terraform:  } 2025-04-13 00:01:18.523124 | orchestrator | 00:01:18.523 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-04-13 00:01:18.523183 | orchestrator | 00:01:18.523 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.523226 | orchestrator | 00:01:18.523 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.523256 | orchestrator | 00:01:18.523 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.523300 | orchestrator | 00:01:18.523 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.523346 | orchestrator | 00:01:18.523 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.523398 | orchestrator | 00:01:18.523 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-04-13 00:01:18.523442 | orchestrator | 00:01:18.523 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.523472 | orchestrator | 00:01:18.523 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.523503 | orchestrator | 00:01:18.523 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.523521 | orchestrator | 00:01:18.523 STDOUT terraform:  } 2025-04-13 00:01:18.523585 | orchestrator | 00:01:18.523 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-04-13 00:01:18.523645 | orchestrator | 00:01:18.523 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.523689 | orchestrator | 00:01:18.523 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.523717 | orchestrator | 00:01:18.523 STDOUT terraform:  + availability_zone = "nova" 
2025-04-13 00:01:18.523762 | orchestrator | 00:01:18.523 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.523806 | orchestrator | 00:01:18.523 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.523860 | orchestrator | 00:01:18.523 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-04-13 00:01:18.523911 | orchestrator | 00:01:18.523 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.523938 | orchestrator | 00:01:18.523 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.523968 | orchestrator | 00:01:18.523 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.523986 | orchestrator | 00:01:18.523 STDOUT terraform:  } 2025-04-13 00:01:18.524049 | orchestrator | 00:01:18.523 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-04-13 00:01:18.524137 | orchestrator | 00:01:18.524 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.524180 | orchestrator | 00:01:18.524 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.524211 | orchestrator | 00:01:18.524 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.524255 | orchestrator | 00:01:18.524 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.524298 | orchestrator | 00:01:18.524 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.524351 | orchestrator | 00:01:18.524 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-04-13 00:01:18.524394 | orchestrator | 00:01:18.524 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.524423 | orchestrator | 00:01:18.524 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.524453 | orchestrator | 00:01:18.524 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.524473 | orchestrator | 00:01:18.524 STDOUT terraform:  } 2025-04-13 00:01:18.524537 | orchestrator | 00:01:18.524 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-04-13 00:01:18.524599 | orchestrator | 00:01:18.524 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.524641 | orchestrator | 00:01:18.524 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.524669 | orchestrator | 00:01:18.524 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.524709 | orchestrator | 00:01:18.524 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.524748 | orchestrator | 00:01:18.524 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.524797 | orchestrator | 00:01:18.524 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-04-13 00:01:18.524836 | orchestrator | 00:01:18.524 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.524862 | orchestrator | 00:01:18.524 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.524889 | orchestrator | 00:01:18.524 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.524906 | orchestrator | 00:01:18.524 STDOUT terraform:  } 2025-04-13 00:01:18.524964 | orchestrator | 00:01:18.524 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-04-13 00:01:18.525019 | orchestrator | 00:01:18.524 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.525058 | orchestrator | 00:01:18.525 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.525095 | orchestrator | 00:01:18.525 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.525133 | orchestrator | 00:01:18.525 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.525172 | orchestrator | 00:01:18.525 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.525221 | orchestrator | 00:01:18.525 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-04-13 00:01:18.525260 | orchestrator | 00:01:18.525 STDOUT 
terraform:  + region = (known after apply) 2025-04-13 00:01:18.525286 | orchestrator | 00:01:18.525 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.525312 | orchestrator | 00:01:18.525 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.525330 | orchestrator | 00:01:18.525 STDOUT terraform:  } 2025-04-13 00:01:18.525387 | orchestrator | 00:01:18.525 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-04-13 00:01:18.525441 | orchestrator | 00:01:18.525 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.525481 | orchestrator | 00:01:18.525 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.525508 | orchestrator | 00:01:18.525 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.525548 | orchestrator | 00:01:18.525 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.525590 | orchestrator | 00:01:18.525 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.525635 | orchestrator | 00:01:18.525 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-04-13 00:01:18.525675 | orchestrator | 00:01:18.525 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.525700 | orchestrator | 00:01:18.525 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.525727 | orchestrator | 00:01:18.525 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.525743 | orchestrator | 00:01:18.525 STDOUT terraform:  } 2025-04-13 00:01:18.525801 | orchestrator | 00:01:18.525 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-04-13 00:01:18.525855 | orchestrator | 00:01:18.525 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.525894 | orchestrator | 00:01:18.525 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.525919 | orchestrator | 00:01:18.525 STDOUT terraform:  + availability_zone = "nova" 
2025-04-13 00:01:18.525961 | orchestrator | 00:01:18.525 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.525999 | orchestrator | 00:01:18.525 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.526063 | orchestrator | 00:01:18.525 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-04-13 00:01:18.526116 | orchestrator | 00:01:18.526 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.526141 | orchestrator | 00:01:18.526 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.526168 | orchestrator | 00:01:18.526 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.526185 | orchestrator | 00:01:18.526 STDOUT terraform:  } 2025-04-13 00:01:18.526242 | orchestrator | 00:01:18.526 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-04-13 00:01:18.526296 | orchestrator | 00:01:18.526 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-13 00:01:18.526334 | orchestrator | 00:01:18.526 STDOUT terraform:  + attachment = (known after apply) 2025-04-13 00:01:18.526361 | orchestrator | 00:01:18.526 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.526400 | orchestrator | 00:01:18.526 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.526439 | orchestrator | 00:01:18.526 STDOUT terraform:  + metadata = (known after apply) 2025-04-13 00:01:18.526487 | orchestrator | 00:01:18.526 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-04-13 00:01:18.526525 | orchestrator | 00:01:18.526 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.526550 | orchestrator | 00:01:18.526 STDOUT terraform:  + size = 20 2025-04-13 00:01:18.526578 | orchestrator | 00:01:18.526 STDOUT terraform:  + volume_type = "ssd" 2025-04-13 00:01:18.526595 | orchestrator | 00:01:18.526 STDOUT terraform:  } 2025-04-13 00:01:18.526652 | orchestrator | 00:01:18.526 STDOUT terraform:  # 
openstack_compute_instance_v2.manager_server will be created 2025-04-13 00:01:18.526706 | orchestrator | 00:01:18.526 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-04-13 00:01:18.526749 | orchestrator | 00:01:18.526 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-13 00:01:18.526794 | orchestrator | 00:01:18.526 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-13 00:01:18.526838 | orchestrator | 00:01:18.526 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-13 00:01:18.526882 | orchestrator | 00:01:18.526 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.526912 | orchestrator | 00:01:18.526 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.526939 | orchestrator | 00:01:18.526 STDOUT terraform:  + config_drive = true 2025-04-13 00:01:18.526984 | orchestrator | 00:01:18.526 STDOUT terraform:  + created = (known after apply) 2025-04-13 00:01:18.527030 | orchestrator | 00:01:18.526 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-13 00:01:18.527067 | orchestrator | 00:01:18.527 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-04-13 00:01:18.527109 | orchestrator | 00:01:18.527 STDOUT terraform:  + force_delete = false 2025-04-13 00:01:18.527153 | orchestrator | 00:01:18.527 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.527198 | orchestrator | 00:01:18.527 STDOUT terraform:  + image_id = (known after apply) 2025-04-13 00:01:18.527243 | orchestrator | 00:01:18.527 STDOUT terraform:  + image_name = (known after apply) 2025-04-13 00:01:18.527274 | orchestrator | 00:01:18.527 STDOUT terraform:  + key_pair = "testbed" 2025-04-13 00:01:18.527314 | orchestrator | 00:01:18.527 STDOUT terraform:  + name = "testbed-manager" 2025-04-13 00:01:18.527345 | orchestrator | 00:01:18.527 STDOUT terraform:  + power_state = "active" 2025-04-13 00:01:18.527390 | orchestrator | 00:01:18.527 STDOUT terraform:  + region = (known after 
apply) 2025-04-13 00:01:18.527434 | orchestrator | 00:01:18.527 STDOUT terraform:  + security_groups = (known after apply) 2025-04-13 00:01:18.527464 | orchestrator | 00:01:18.527 STDOUT terraform:  + stop_before_destroy = false 2025-04-13 00:01:18.527509 | orchestrator | 00:01:18.527 STDOUT terraform:  + updated = (known after apply) 2025-04-13 00:01:18.527556 | orchestrator | 00:01:18.527 STDOUT terraform:  + user_data = (known after apply) 2025-04-13 00:01:18.527573 | orchestrator | 00:01:18.527 STDOUT terraform:  + block_device { 2025-04-13 00:01:18.527606 | orchestrator | 00:01:18.527 STDOUT terraform:  + boot_index = 0 2025-04-13 00:01:18.527640 | orchestrator | 00:01:18.527 STDOUT terraform:  + delete_on_termination = false 2025-04-13 00:01:18.527678 | orchestrator | 00:01:18.527 STDOUT terraform:  + destination_type = "volume" 2025-04-13 00:01:18.527714 | orchestrator | 00:01:18.527 STDOUT terraform:  + multiattach = false 2025-04-13 00:01:18.527751 | orchestrator | 00:01:18.527 STDOUT terraform:  + source_type = "volume" 2025-04-13 00:01:18.527799 | orchestrator | 00:01:18.527 STDOUT terraform:  + uuid = (known after apply) 2025-04-13 00:01:18.527822 | orchestrator | 00:01:18.527 STDOUT terraform:  } 2025-04-13 00:01:18.527829 | orchestrator | 00:01:18.527 STDOUT terraform:  + network { 2025-04-13 00:01:18.527857 | orchestrator | 00:01:18.527 STDOUT terraform:  + access_network = false 2025-04-13 00:01:18.527897 | orchestrator | 00:01:18.527 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-13 00:01:18.527936 | orchestrator | 00:01:18.527 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-13 00:01:18.527975 | orchestrator | 00:01:18.527 STDOUT terraform:  + mac = (known after apply) 2025-04-13 00:01:18.528014 | orchestrator | 00:01:18.527 STDOUT terraform:  + name = (known after apply) 2025-04-13 00:01:18.528054 | orchestrator | 00:01:18.528 STDOUT terraform:  + port = (known after apply) 2025-04-13 00:01:18.528117 | orchestrator | 
00:01:18.528 STDOUT terraform:  + uuid = (known after apply) 2025-04-13 00:01:18.528134 | orchestrator | 00:01:18.528 STDOUT terraform:  } 2025-04-13 00:01:18.528153 | orchestrator | 00:01:18.528 STDOUT terraform:  } 2025-04-13 00:01:18.528209 | orchestrator | 00:01:18.528 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-04-13 00:01:18.528262 | orchestrator | 00:01:18.528 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-13 00:01:18.528306 | orchestrator | 00:01:18.528 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-13 00:01:18.528351 | orchestrator | 00:01:18.528 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-13 00:01:18.528394 | orchestrator | 00:01:18.528 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-13 00:01:18.528439 | orchestrator | 00:01:18.528 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.528468 | orchestrator | 00:01:18.528 STDOUT terraform:  + availability_zone = "nova" 2025-04-13 00:01:18.528494 | orchestrator | 00:01:18.528 STDOUT terraform:  + config_drive = true 2025-04-13 00:01:18.528540 | orchestrator | 00:01:18.528 STDOUT terraform:  + created = (known after apply) 2025-04-13 00:01:18.528585 | orchestrator | 00:01:18.528 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-13 00:01:18.528622 | orchestrator | 00:01:18.528 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-13 00:01:18.528652 | orchestrator | 00:01:18.528 STDOUT terraform:  + force_delete = false 2025-04-13 00:01:18.528696 | orchestrator | 00:01:18.528 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.528737 | orchestrator | 00:01:18.528 STDOUT terraform:  + image_id = (known after apply) 2025-04-13 00:01:18.528780 | orchestrator | 00:01:18.528 STDOUT terraform:  + image_name = (known after apply) 2025-04-13 00:01:18.528809 | orchestrator | 00:01:18.528 STDOUT terraform:  + key_pair = "testbed" 2025-04-13 
00:01:18.528846 | orchestrator | 00:01:18.528 STDOUT terraform:
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
orchestrator | 00:01:18.541 STDOUT terraform:  } 2025-04-13 00:01:18.541268 | orchestrator | 00:01:18.541 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-04-13 00:01:18.541317 | orchestrator | 00:01:18.541 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-13 00:01:18.541347 | orchestrator | 00:01:18.541 STDOUT terraform:  + device = (known after apply) 2025-04-13 00:01:18.541378 | orchestrator | 00:01:18.541 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.541406 | orchestrator | 00:01:18.541 STDOUT terraform:  + instance_id = (known after apply) 2025-04-13 00:01:18.541437 | orchestrator | 00:01:18.541 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.541466 | orchestrator | 00:01:18.541 STDOUT terraform:  + volume_id = (known after apply) 2025-04-13 00:01:18.542452 | orchestrator | 00:01:18.542 STDOUT terraform:  } 2025-04-13 00:01:18.542477 | orchestrator | 00:01:18.542 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-04-13 00:01:18.542543 | orchestrator | 00:01:18.542 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-13 00:01:18.542565 | orchestrator | 00:01:18.542 STDOUT terraform:  + device = (known after apply) 2025-04-13 00:01:18.542603 | orchestrator | 00:01:18.542 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.542651 | orchestrator | 00:01:18.542 STDOUT terraform:  + instance_id = (known after apply) 2025-04-13 00:01:18.542686 | orchestrator | 00:01:18.542 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.542731 | orchestrator | 00:01:18.542 STDOUT terraform:  + volume_id = (known after apply) 2025-04-13 00:01:18.542740 | orchestrator | 00:01:18.542 STDOUT terraform:  } 2025-04-13 00:01:18.542828 | orchestrator | 00:01:18.542 STDOUT terraform:  # 
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-04-13 00:01:18.542916 | orchestrator | 00:01:18.542 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-04-13 00:01:18.542959 | orchestrator | 00:01:18.542 STDOUT terraform:  + fixed_ip = (known after apply) 2025-04-13 00:01:18.543003 | orchestrator | 00:01:18.542 STDOUT terraform:  + floating_ip = (known after apply) 2025-04-13 00:01:18.543048 | orchestrator | 00:01:18.542 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.543104 | orchestrator | 00:01:18.543 STDOUT terraform:  + port_id = (known after apply) 2025-04-13 00:01:18.543153 | orchestrator | 00:01:18.543 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.543164 | orchestrator | 00:01:18.543 STDOUT terraform:  } 2025-04-13 00:01:18.543236 | orchestrator | 00:01:18.543 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-04-13 00:01:18.543301 | orchestrator | 00:01:18.543 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-04-13 00:01:18.543328 | orchestrator | 00:01:18.543 STDOUT terraform:  + address = (known after apply) 2025-04-13 00:01:18.543406 | orchestrator | 00:01:18.543 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.543432 | orchestrator | 00:01:18.543 STDOUT terraform:  + dns_domain = (known after apply) 2025-04-13 00:01:18.543458 | orchestrator | 00:01:18.543 STDOUT terraform:  + dns_name = (known after apply) 2025-04-13 00:01:18.543483 | orchestrator | 00:01:18.543 STDOUT terraform:  + fixed_ip = (known after apply) 2025-04-13 00:01:18.543510 | orchestrator | 00:01:18.543 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.543532 | orchestrator | 00:01:18.543 STDOUT terraform:  + pool = "public" 2025-04-13 00:01:18.543558 | orchestrator | 00:01:18.543 STDOUT terraform:  + 
port_id = (known after apply) 2025-04-13 00:01:18.543583 | orchestrator | 00:01:18.543 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.543608 | orchestrator | 00:01:18.543 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-13 00:01:18.543637 | orchestrator | 00:01:18.543 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.543660 | orchestrator | 00:01:18.543 STDOUT terraform:  } 2025-04-13 00:01:18.543703 | orchestrator | 00:01:18.543 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-04-13 00:01:18.543759 | orchestrator | 00:01:18.543 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-04-13 00:01:18.543795 | orchestrator | 00:01:18.543 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-13 00:01:18.543833 | orchestrator | 00:01:18.543 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.543852 | orchestrator | 00:01:18.543 STDOUT terraform:  + availability_zone_hints = [ 2025-04-13 00:01:18.543863 | orchestrator | 00:01:18.543 STDOUT terraform:  + "nova", 2025-04-13 00:01:18.543873 | orchestrator | 00:01:18.543 STDOUT terraform:  ] 2025-04-13 00:01:18.543910 | orchestrator | 00:01:18.543 STDOUT terraform:  + dns_domain = (known after apply) 2025-04-13 00:01:18.543946 | orchestrator | 00:01:18.543 STDOUT terraform:  + external = (known after apply) 2025-04-13 00:01:18.543996 | orchestrator | 00:01:18.543 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.544034 | orchestrator | 00:01:18.543 STDOUT terraform:  + mtu = (known after apply) 2025-04-13 00:01:18.544104 | orchestrator | 00:01:18.544 STDOUT terraform:  + name = "net-testbed-management" 2025-04-13 00:01:18.544113 | orchestrator | 00:01:18.544 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-13 00:01:18.544151 | orchestrator | 00:01:18.544 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-13 
00:01:18.544189 | orchestrator | 00:01:18.544 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.544225 | orchestrator | 00:01:18.544 STDOUT terraform:  + shared = (known after apply) 2025-04-13 00:01:18.544262 | orchestrator | 00:01:18.544 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.544299 | orchestrator | 00:01:18.544 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-04-13 00:01:18.544322 | orchestrator | 00:01:18.544 STDOUT terraform:  + segments (known after apply) 2025-04-13 00:01:18.544337 | orchestrator | 00:01:18.544 STDOUT terraform:  } 2025-04-13 00:01:18.544383 | orchestrator | 00:01:18.544 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-04-13 00:01:18.544431 | orchestrator | 00:01:18.544 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-04-13 00:01:18.544473 | orchestrator | 00:01:18.544 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-13 00:01:18.544505 | orchestrator | 00:01:18.544 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-13 00:01:18.544540 | orchestrator | 00:01:18.544 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-13 00:01:18.544576 | orchestrator | 00:01:18.544 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.544620 | orchestrator | 00:01:18.544 STDOUT terraform:  + device_id = (known after apply) 2025-04-13 00:01:18.544658 | orchestrator | 00:01:18.544 STDOUT terraform:  + device_owner = (known after apply) 2025-04-13 00:01:18.544693 | orchestrator | 00:01:18.544 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-13 00:01:18.544730 | orchestrator | 00:01:18.544 STDOUT terraform:  + dns_name = (known after apply) 2025-04-13 00:01:18.544767 | orchestrator | 00:01:18.544 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.544803 | orchestrator | 00:01:18.544 STDOUT terraform:  + 
mac_address = (known after apply) 2025-04-13 00:01:18.544844 | orchestrator | 00:01:18.544 STDOUT terraform:  + network_id = (known after apply) 2025-04-13 00:01:18.544880 | orchestrator | 00:01:18.544 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-13 00:01:18.544911 | orchestrator | 00:01:18.544 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-13 00:01:18.544949 | orchestrator | 00:01:18.544 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.544984 | orchestrator | 00:01:18.544 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-13 00:01:18.545020 | orchestrator | 00:01:18.544 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.545040 | orchestrator | 00:01:18.545 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.545120 | orchestrator | 00:01:18.545 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-13 00:01:18.545134 | orchestrator | 00:01:18.545 STDOUT terraform:  } 2025-04-13 00:01:18.545141 | orchestrator | 00:01:18.545 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.545164 | orchestrator | 00:01:18.545 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-13 00:01:18.545171 | orchestrator | 00:01:18.545 STDOUT terraform:  } 2025-04-13 00:01:18.545200 | orchestrator | 00:01:18.545 STDOUT terraform:  + binding (known after apply) 2025-04-13 00:01:18.545207 | orchestrator | 00:01:18.545 STDOUT terraform:  + fixed_ip { 2025-04-13 00:01:18.545235 | orchestrator | 00:01:18.545 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-04-13 00:01:18.545264 | orchestrator | 00:01:18.545 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-13 00:01:18.545271 | orchestrator | 00:01:18.545 STDOUT terraform:  } 2025-04-13 00:01:18.545278 | orchestrator | 00:01:18.545 STDOUT terraform:  } 2025-04-13 00:01:18.545344 | orchestrator | 00:01:18.545 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will 
be created 2025-04-13 00:01:18.545386 | orchestrator | 00:01:18.545 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-13 00:01:18.545409 | orchestrator | 00:01:18.545 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-13 00:01:18.545445 | orchestrator | 00:01:18.545 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-13 00:01:18.545480 | orchestrator | 00:01:18.545 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-13 00:01:18.545517 | orchestrator | 00:01:18.545 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.545553 | orchestrator | 00:01:18.545 STDOUT terraform:  + device_id = (known after apply) 2025-04-13 00:01:18.545589 | orchestrator | 00:01:18.545 STDOUT terraform:  + device_owner = (known after apply) 2025-04-13 00:01:18.545625 | orchestrator | 00:01:18.545 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-13 00:01:18.545661 | orchestrator | 00:01:18.545 STDOUT terraform:  + dns_name = (known after apply) 2025-04-13 00:01:18.545698 | orchestrator | 00:01:18.545 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.545734 | orchestrator | 00:01:18.545 STDOUT terraform:  + mac_address = (known after apply) 2025-04-13 00:01:18.545771 | orchestrator | 00:01:18.545 STDOUT terraform:  + network_id = (known after apply) 2025-04-13 00:01:18.545809 | orchestrator | 00:01:18.545 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-13 00:01:18.545842 | orchestrator | 00:01:18.545 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-13 00:01:18.545882 | orchestrator | 00:01:18.545 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.545915 | orchestrator | 00:01:18.545 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-13 00:01:18.545951 | orchestrator | 00:01:18.545 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.545973 | 
orchestrator | 00:01:18.545 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.546001 | orchestrator | 00:01:18.545 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-13 00:01:18.546008 | orchestrator | 00:01:18.545 STDOUT terraform:  } 2025-04-13 00:01:18.546055 | orchestrator | 00:01:18.546 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.546109 | orchestrator | 00:01:18.546 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-13 00:01:18.546138 | orchestrator | 00:01:18.546 STDOUT terraform:  } 2025-04-13 00:01:18.546143 | orchestrator | 00:01:18.546 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.546150 | orchestrator | 00:01:18.546 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-13 00:01:18.546167 | orchestrator | 00:01:18.546 STDOUT terraform:  } 2025-04-13 00:01:18.546174 | orchestrator | 00:01:18.546 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.546198 | orchestrator | 00:01:18.546 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-13 00:01:18.546205 | orchestrator | 00:01:18.546 STDOUT terraform:  } 2025-04-13 00:01:18.546233 | orchestrator | 00:01:18.546 STDOUT terraform:  + binding (known after apply) 2025-04-13 00:01:18.546240 | orchestrator | 00:01:18.546 STDOUT terraform:  + fixed_ip { 2025-04-13 00:01:18.546270 | orchestrator | 00:01:18.546 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-04-13 00:01:18.546300 | orchestrator | 00:01:18.546 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-13 00:01:18.546308 | orchestrator | 00:01:18.546 STDOUT terraform:  } 2025-04-13 00:01:18.546315 | orchestrator | 00:01:18.546 STDOUT terraform:  } 2025-04-13 00:01:18.546364 | orchestrator | 00:01:18.546 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-04-13 00:01:18.546409 | orchestrator | 00:01:18.546 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-13 
00:01:18.546444 | orchestrator | 00:01:18.546 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-13 00:01:18.546480 | orchestrator | 00:01:18.546 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-13 00:01:18.546514 | orchestrator | 00:01:18.546 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-13 00:01:18.546554 | orchestrator | 00:01:18.546 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.546587 | orchestrator | 00:01:18.546 STDOUT terraform:  + device_id = (known after apply) 2025-04-13 00:01:18.546623 | orchestrator | 00:01:18.546 STDOUT terraform:  + device_owner = (known after apply) 2025-04-13 00:01:18.546658 | orchestrator | 00:01:18.546 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-13 00:01:18.546694 | orchestrator | 00:01:18.546 STDOUT terraform:  + dns_name = (known after apply) 2025-04-13 00:01:18.546731 | orchestrator | 00:01:18.546 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.546768 | orchestrator | 00:01:18.546 STDOUT terraform:  + mac_address = (known after apply) 2025-04-13 00:01:18.546803 | orchestrator | 00:01:18.546 STDOUT terraform:  + network_id = (known after apply) 2025-04-13 00:01:18.546838 | orchestrator | 00:01:18.546 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-13 00:01:18.546874 | orchestrator | 00:01:18.546 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-13 00:01:18.546910 | orchestrator | 00:01:18.546 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.546947 | orchestrator | 00:01:18.546 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-13 00:01:18.546983 | orchestrator | 00:01:18.546 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.547002 | orchestrator | 00:01:18.546 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.547031 | orchestrator | 00:01:18.546 STDOUT terraform:  + ip_address = 
"192.168.112.0/20" 2025-04-13 00:01:18.547040 | orchestrator | 00:01:18.547 STDOUT terraform:  } 2025-04-13 00:01:18.547060 | orchestrator | 00:01:18.547 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.547099 | orchestrator | 00:01:18.547 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-13 00:01:18.547106 | orchestrator | 00:01:18.547 STDOUT terraform:  } 2025-04-13 00:01:18.547127 | orchestrator | 00:01:18.547 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.547156 | orchestrator | 00:01:18.547 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-13 00:01:18.547163 | orchestrator | 00:01:18.547 STDOUT terraform:  } 2025-04-13 00:01:18.547186 | orchestrator | 00:01:18.547 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.547215 | orchestrator | 00:01:18.547 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-13 00:01:18.547222 | orchestrator | 00:01:18.547 STDOUT terraform:  } 2025-04-13 00:01:18.547249 | orchestrator | 00:01:18.547 STDOUT terraform:  + binding (known after apply) 2025-04-13 00:01:18.547257 | orchestrator | 00:01:18.547 STDOUT terraform:  + fixed_ip { 2025-04-13 00:01:18.547285 | orchestrator | 00:01:18.547 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-04-13 00:01:18.547315 | orchestrator | 00:01:18.547 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-13 00:01:18.547322 | orchestrator | 00:01:18.547 STDOUT terraform:  } 2025-04-13 00:01:18.547338 | orchestrator | 00:01:18.547 STDOUT terraform:  } 2025-04-13 00:01:18.547383 | orchestrator | 00:01:18.547 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-04-13 00:01:18.547429 | orchestrator | 00:01:18.547 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-13 00:01:18.547465 | orchestrator | 00:01:18.547 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-13 00:01:18.547501 | orchestrator | 00:01:18.547 
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-13 00:01:18.547544 | orchestrator | 00:01:18.547 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-13 00:01:18.547580 | orchestrator | 00:01:18.547 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.547616 | orchestrator | 00:01:18.547 STDOUT terraform:  + device_id = (known after apply) 2025-04-13 00:01:18.547651 | orchestrator | 00:01:18.547 STDOUT terraform:  + device_owner = (known after apply) 2025-04-13 00:01:18.547687 | orchestrator | 00:01:18.547 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-13 00:01:18.547723 | orchestrator | 00:01:18.547 STDOUT terraform:  + dns_name = (known after apply) 2025-04-13 00:01:18.547760 | orchestrator | 00:01:18.547 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.547794 | orchestrator | 00:01:18.547 STDOUT terraform:  + mac_address = (known after apply) 2025-04-13 00:01:18.547830 | orchestrator | 00:01:18.547 STDOUT terraform:  + network_id = (known after apply) 2025-04-13 00:01:18.547867 | orchestrator | 00:01:18.547 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-13 00:01:18.547902 | orchestrator | 00:01:18.547 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-13 00:01:18.547941 | orchestrator | 00:01:18.547 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.547975 | orchestrator | 00:01:18.547 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-13 00:01:18.548011 | orchestrator | 00:01:18.547 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.548030 | orchestrator | 00:01:18.548 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.548059 | orchestrator | 00:01:18.548 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-13 00:01:18.548066 | orchestrator | 00:01:18.548 STDOUT terraform:  } 2025-04-13 00:01:18.548099 | orchestrator | 00:01:18.548 STDOUT terraform:  
+ allowed_address_pairs { 2025-04-13 00:01:18.548129 | orchestrator | 00:01:18.548 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-13 00:01:18.548136 | orchestrator | 00:01:18.548 STDOUT terraform:  } 2025-04-13 00:01:18.548157 | orchestrator | 00:01:18.548 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.548185 | orchestrator | 00:01:18.548 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-13 00:01:18.548193 | orchestrator | 00:01:18.548 STDOUT terraform:  } 2025-04-13 00:01:18.548214 | orchestrator | 00:01:18.548 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.548242 | orchestrator | 00:01:18.548 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-13 00:01:18.548250 | orchestrator | 00:01:18.548 STDOUT terraform:  } 2025-04-13 00:01:18.548276 | orchestrator | 00:01:18.548 STDOUT terraform:  + binding (known after apply) 2025-04-13 00:01:18.548283 | orchestrator | 00:01:18.548 STDOUT terraform:  + fixed_ip { 2025-04-13 00:01:18.548314 | orchestrator | 00:01:18.548 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-04-13 00:01:18.548341 | orchestrator | 00:01:18.548 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-13 00:01:18.548348 | orchestrator | 00:01:18.548 STDOUT terraform:  } 2025-04-13 00:01:18.548365 | orchestrator | 00:01:18.548 STDOUT terraform:  } 2025-04-13 00:01:18.548412 | orchestrator | 00:01:18.548 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-04-13 00:01:18.548457 | orchestrator | 00:01:18.548 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-13 00:01:18.548493 | orchestrator | 00:01:18.548 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-13 00:01:18.548530 | orchestrator | 00:01:18.548 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-13 00:01:18.548563 | orchestrator | 00:01:18.548 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-04-13 00:01:18.548599 | orchestrator | 00:01:18.548 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.548635 | orchestrator | 00:01:18.548 STDOUT terraform:  + device_id = (known after apply) 2025-04-13 00:01:18.548698 | orchestrator | 00:01:18.548 STDOUT terraform:  + device_owner = (known after apply) 2025-04-13 00:01:18.548740 | orchestrator | 00:01:18.548 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-13 00:01:18.548777 | orchestrator | 00:01:18.548 STDOUT terraform:  + dns_name = (known after apply) 2025-04-13 00:01:18.548814 | orchestrator | 00:01:18.548 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.548850 | orchestrator | 00:01:18.548 STDOUT terraform:  + mac_address = (known after apply) 2025-04-13 00:01:18.548886 | orchestrator | 00:01:18.548 STDOUT terraform:  + network_id = (known after apply) 2025-04-13 00:01:18.548921 | orchestrator | 00:01:18.548 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-13 00:01:18.548958 | orchestrator | 00:01:18.548 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-13 00:01:18.548993 | orchestrator | 00:01:18.548 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.549028 | orchestrator | 00:01:18.548 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-13 00:01:18.549065 | orchestrator | 00:01:18.549 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.549112 | orchestrator | 00:01:18.549 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.549131 | orchestrator | 00:01:18.549 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-13 00:01:18.549138 | orchestrator | 00:01:18.549 STDOUT terraform:  } 2025-04-13 00:01:18.549158 | orchestrator | 00:01:18.549 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.549192 | orchestrator | 00:01:18.549 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-13 00:01:18.549200 | 
orchestrator | 00:01:18.549 STDOUT terraform:  } 2025-04-13 00:01:18.549232 | orchestrator | 00:01:18.549 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.549239 | orchestrator | 00:01:18.549 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-13 00:01:18.549246 | orchestrator | 00:01:18.549 STDOUT terraform:  } 2025-04-13 00:01:18.549269 | orchestrator | 00:01:18.549 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.549298 | orchestrator | 00:01:18.549 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-13 00:01:18.549305 | orchestrator | 00:01:18.549 STDOUT terraform:  } 2025-04-13 00:01:18.549331 | orchestrator | 00:01:18.549 STDOUT terraform:  + binding (known after apply) 2025-04-13 00:01:18.549338 | orchestrator | 00:01:18.549 STDOUT terraform:  + fixed_ip { 2025-04-13 00:01:18.549365 | orchestrator | 00:01:18.549 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-04-13 00:01:18.549395 | orchestrator | 00:01:18.549 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-13 00:01:18.549402 | orchestrator | 00:01:18.549 STDOUT terraform:  } 2025-04-13 00:01:18.549417 | orchestrator | 00:01:18.549 STDOUT terraform:  } 2025-04-13 00:01:18.549464 | orchestrator | 00:01:18.549 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-04-13 00:01:18.549509 | orchestrator | 00:01:18.549 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-13 00:01:18.549544 | orchestrator | 00:01:18.549 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-13 00:01:18.549583 | orchestrator | 00:01:18.549 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-13 00:01:18.549618 | orchestrator | 00:01:18.549 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-13 00:01:18.549655 | orchestrator | 00:01:18.549 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.549691 | orchestrator | 
00:01:18.549 STDOUT terraform:  + device_id = (known after apply) 2025-04-13 00:01:18.549727 | orchestrator | 00:01:18.549 STDOUT terraform:  + device_owner = (known after apply) 2025-04-13 00:01:18.549763 | orchestrator | 00:01:18.549 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-13 00:01:18.549806 | orchestrator | 00:01:18.549 STDOUT terraform:  + dns_name = (known after apply) 2025-04-13 00:01:18.549835 | orchestrator | 00:01:18.549 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.549871 | orchestrator | 00:01:18.549 STDOUT terraform:  + mac_address = (known after apply) 2025-04-13 00:01:18.549906 | orchestrator | 00:01:18.549 STDOUT terraform:  + network_id = (known after apply) 2025-04-13 00:01:18.549941 | orchestrator | 00:01:18.549 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-13 00:01:18.549980 | orchestrator | 00:01:18.549 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-13 00:01:18.550053 | orchestrator | 00:01:18.549 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.550062 | orchestrator | 00:01:18.550 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-13 00:01:18.550111 | orchestrator | 00:01:18.550 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.550130 | orchestrator | 00:01:18.550 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.550159 | orchestrator | 00:01:18.550 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-13 00:01:18.550170 | orchestrator | 00:01:18.550 STDOUT terraform:  } 2025-04-13 00:01:18.550186 | orchestrator | 00:01:18.550 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.550216 | orchestrator | 00:01:18.550 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-13 00:01:18.550226 | orchestrator | 00:01:18.550 STDOUT terraform:  } 2025-04-13 00:01:18.550249 | orchestrator | 00:01:18.550 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 
00:01:18.550271 | orchestrator | 00:01:18.550 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-13 00:01:18.550278 | orchestrator | 00:01:18.550 STDOUT terraform:  } 2025-04-13 00:01:18.550302 | orchestrator | 00:01:18.550 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.550328 | orchestrator | 00:01:18.550 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-13 00:01:18.550335 | orchestrator | 00:01:18.550 STDOUT terraform:  } 2025-04-13 00:01:18.550358 | orchestrator | 00:01:18.550 STDOUT terraform:  + binding (known after apply) 2025-04-13 00:01:18.550366 | orchestrator | 00:01:18.550 STDOUT terraform:  + fixed_ip { 2025-04-13 00:01:18.550394 | orchestrator | 00:01:18.550 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-04-13 00:01:18.550423 | orchestrator | 00:01:18.550 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-13 00:01:18.550431 | orchestrator | 00:01:18.550 STDOUT terraform:  } 2025-04-13 00:01:18.550449 | orchestrator | 00:01:18.550 STDOUT terraform:  } 2025-04-13 00:01:18.550494 | orchestrator | 00:01:18.550 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-04-13 00:01:18.550539 | orchestrator | 00:01:18.550 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-13 00:01:18.550578 | orchestrator | 00:01:18.550 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-13 00:01:18.550610 | orchestrator | 00:01:18.550 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-13 00:01:18.550639 | orchestrator | 00:01:18.550 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-13 00:01:18.550667 | orchestrator | 00:01:18.550 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.550703 | orchestrator | 00:01:18.550 STDOUT terraform:  + device_id = (known after apply) 2025-04-13 00:01:18.550738 | orchestrator | 00:01:18.550 STDOUT terraform:  + device_owner = (known after 
apply) 2025-04-13 00:01:18.550773 | orchestrator | 00:01:18.550 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-13 00:01:18.550813 | orchestrator | 00:01:18.550 STDOUT terraform:  + dns_name = (known after apply) 2025-04-13 00:01:18.550846 | orchestrator | 00:01:18.550 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.550883 | orchestrator | 00:01:18.550 STDOUT terraform:  + mac_address = (known after apply) 2025-04-13 00:01:18.550920 | orchestrator | 00:01:18.550 STDOUT terraform:  + network_id = (known after apply) 2025-04-13 00:01:18.550954 | orchestrator | 00:01:18.550 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-13 00:01:18.550990 | orchestrator | 00:01:18.550 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-13 00:01:18.551024 | orchestrator | 00:01:18.550 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.551058 | orchestrator | 00:01:18.551 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-13 00:01:18.551114 | orchestrator | 00:01:18.551 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.551140 | orchestrator | 00:01:18.551 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.551152 | orchestrator | 00:01:18.551 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-13 00:01:18.551170 | orchestrator | 00:01:18.551 STDOUT terraform:  } 2025-04-13 00:01:18.551177 | orchestrator | 00:01:18.551 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.551201 | orchestrator | 00:01:18.551 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-13 00:01:18.551222 | orchestrator | 00:01:18.551 STDOUT terraform:  } 2025-04-13 00:01:18.551230 | orchestrator | 00:01:18.551 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.551250 | orchestrator | 00:01:18.551 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-13 00:01:18.551257 | orchestrator | 00:01:18.551 STDOUT terraform:  } 
2025-04-13 00:01:18.551278 | orchestrator | 00:01:18.551 STDOUT terraform:  + allowed_address_pairs { 2025-04-13 00:01:18.551307 | orchestrator | 00:01:18.551 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-13 00:01:18.551314 | orchestrator | 00:01:18.551 STDOUT terraform:  } 2025-04-13 00:01:18.551341 | orchestrator | 00:01:18.551 STDOUT terraform:  + binding (known after apply) 2025-04-13 00:01:18.551356 | orchestrator | 00:01:18.551 STDOUT terraform:  + fixed_ip { 2025-04-13 00:01:18.551381 | orchestrator | 00:01:18.551 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-04-13 00:01:18.551413 | orchestrator | 00:01:18.551 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-13 00:01:18.551420 | orchestrator | 00:01:18.551 STDOUT terraform:  } 2025-04-13 00:01:18.551427 | orchestrator | 00:01:18.551 STDOUT terraform:  } 2025-04-13 00:01:18.554117 | orchestrator | 00:01:18.551 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-04-13 00:01:18.554182 | orchestrator | 00:01:18.551 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-04-13 00:01:18.554189 | orchestrator | 00:01:18.551 STDOUT terraform:  + force_destroy = false 2025-04-13 00:01:18.554195 | orchestrator | 00:01:18.551 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.554201 | orchestrator | 00:01:18.551 STDOUT terraform:  + port_id = (known after apply) 2025-04-13 00:01:18.554206 | orchestrator | 00:01:18.551 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.554211 | orchestrator | 00:01:18.551 STDOUT terraform:  + router_id = (known after apply) 2025-04-13 00:01:18.554216 | orchestrator | 00:01:18.551 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-13 00:01:18.554221 | orchestrator | 00:01:18.551 STDOUT terraform:  } 2025-04-13 00:01:18.554226 | orchestrator | 00:01:18.551 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-04-13 00:01:18.554231 | orchestrator | 00:01:18.551 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-04-13 00:01:18.554245 | orchestrator | 00:01:18.551 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-13 00:01:18.554251 | orchestrator | 00:01:18.551 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.554256 | orchestrator | 00:01:18.551 STDOUT terraform:  + availability_zone_hints = [ 2025-04-13 00:01:18.554261 | orchestrator | 00:01:18.551 STDOUT terraform:  + "nova", 2025-04-13 00:01:18.554266 | orchestrator | 00:01:18.551 STDOUT terraform:  ] 2025-04-13 00:01:18.554271 | orchestrator | 00:01:18.551 STDOUT terraform:  + distributed = (known after apply) 2025-04-13 00:01:18.554277 | orchestrator | 00:01:18.552 STDOUT terraform:  + enable_snat = (known after apply) 2025-04-13 00:01:18.554282 | orchestrator | 00:01:18.552 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-04-13 00:01:18.554287 | orchestrator | 00:01:18.552 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.554292 | orchestrator | 00:01:18.552 STDOUT terraform:  + name = "testbed" 2025-04-13 00:01:18.554297 | orchestrator | 00:01:18.552 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.554301 | orchestrator | 00:01:18.552 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.554306 | orchestrator | 00:01:18.552 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-04-13 00:01:18.554311 | orchestrator | 00:01:18.552 STDOUT terraform:  } 2025-04-13 00:01:18.554316 | orchestrator | 00:01:18.552 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-04-13 00:01:18.554322 | orchestrator | 00:01:18.552 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-04-13 00:01:18.554327 | orchestrator | 00:01:18.552 STDOUT 
terraform:  + description = "ssh" 2025-04-13 00:01:18.554332 | orchestrator | 00:01:18.552 STDOUT terraform:  + direction = "ingress" 2025-04-13 00:01:18.554336 | orchestrator | 00:01:18.552 STDOUT terraform:  + ethertype = "IPv4" 2025-04-13 00:01:18.554341 | orchestrator | 00:01:18.552 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.554346 | orchestrator | 00:01:18.552 STDOUT terraform:  + port_range_max = 22 2025-04-13 00:01:18.554351 | orchestrator | 00:01:18.552 STDOUT terraform:  + port_range_min = 22 2025-04-13 00:01:18.554356 | orchestrator | 00:01:18.552 STDOUT terraform:  + protocol = "tcp" 2025-04-13 00:01:18.554361 | orchestrator | 00:01:18.552 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.554366 | orchestrator | 00:01:18.552 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-13 00:01:18.554371 | orchestrator | 00:01:18.552 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-13 00:01:18.554381 | orchestrator | 00:01:18.552 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-13 00:01:18.554386 | orchestrator | 00:01:18.552 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.554391 | orchestrator | 00:01:18.552 STDOUT terraform:  } 2025-04-13 00:01:18.554397 | orchestrator | 00:01:18.552 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-04-13 00:01:18.554405 | orchestrator | 00:01:18.552 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-04-13 00:01:18.554410 | orchestrator | 00:01:18.552 STDOUT terraform:  + description = "wireguard" 2025-04-13 00:01:18.554415 | orchestrator | 00:01:18.552 STDOUT terraform:  + direction = "ingress" 2025-04-13 00:01:18.554419 | orchestrator | 00:01:18.552 STDOUT terraform:  + ethertype = "IPv4" 2025-04-13 00:01:18.554424 | orchestrator | 00:01:18.552 STDOUT terraform:  + id = (known after apply) 
2025-04-13 00:01:18.554429 | orchestrator | 00:01:18.552 STDOUT terraform:  + port_range_max = 51820 2025-04-13 00:01:18.554434 | orchestrator | 00:01:18.552 STDOUT terraform:  + port_range_min = 51820 2025-04-13 00:01:18.554439 | orchestrator | 00:01:18.552 STDOUT terraform:  + protocol = "udp" 2025-04-13 00:01:18.554444 | orchestrator | 00:01:18.552 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.554449 | orchestrator | 00:01:18.552 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-13 00:01:18.554454 | orchestrator | 00:01:18.552 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-13 00:01:18.554459 | orchestrator | 00:01:18.552 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-13 00:01:18.554464 | orchestrator | 00:01:18.552 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.554469 | orchestrator | 00:01:18.552 STDOUT terraform:  } 2025-04-13 00:01:18.554474 | orchestrator | 00:01:18.552 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-04-13 00:01:18.554479 | orchestrator | 00:01:18.553 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-04-13 00:01:18.554484 | orchestrator | 00:01:18.553 STDOUT terraform:  + direction = "ingress" 2025-04-13 00:01:18.554489 | orchestrator | 00:01:18.553 STDOUT terraform:  + ethertype = "IPv4" 2025-04-13 00:01:18.554494 | orchestrator | 00:01:18.553 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.554498 | orchestrator | 00:01:18.553 STDOUT terraform:  + protocol = "tcp" 2025-04-13 00:01:18.554503 | orchestrator | 00:01:18.553 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.554508 | orchestrator | 00:01:18.553 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-13 00:01:18.554513 | orchestrator | 00:01:18.553 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 
2025-04-13 00:01:18.554518 | orchestrator | 00:01:18.553 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-13 00:01:18.554523 | orchestrator | 00:01:18.553 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.554528 | orchestrator | 00:01:18.553 STDOUT terraform:  } 2025-04-13 00:01:18.554533 | orchestrator | 00:01:18.553 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-04-13 00:01:18.554538 | orchestrator | 00:01:18.553 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-04-13 00:01:18.554546 | orchestrator | 00:01:18.553 STDOUT terraform:  + direction = "ingress" 2025-04-13 00:01:18.554551 | orchestrator | 00:01:18.553 STDOUT terraform:  + ethertype = "IPv4" 2025-04-13 00:01:18.554556 | orchestrator | 00:01:18.553 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.554561 | orchestrator | 00:01:18.553 STDOUT terraform:  + protocol = "udp" 2025-04-13 00:01:18.554568 | orchestrator | 00:01:18.553 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.554578 | orchestrator | 00:01:18.553 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-13 00:01:18.554584 | orchestrator | 00:01:18.553 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-13 00:01:18.554589 | orchestrator | 00:01:18.553 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-13 00:01:18.554594 | orchestrator | 00:01:18.553 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.554598 | orchestrator | 00:01:18.553 STDOUT terraform:  } 2025-04-13 00:01:18.554603 | orchestrator | 00:01:18.553 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-04-13 00:01:18.554608 | orchestrator | 00:01:18.553 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 
2025-04-13 00:01:18.554616 | orchestrator | 00:01:18.553 STDOUT terraform:  + direction = "ingress" 2025-04-13 00:01:18.554621 | orchestrator | 00:01:18.553 STDOUT terraform:  + ethertype = "IPv4" 2025-04-13 00:01:18.554625 | orchestrator | 00:01:18.553 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.554631 | orchestrator | 00:01:18.553 STDOUT terraform:  + protocol = "icmp" 2025-04-13 00:01:18.554636 | orchestrator | 00:01:18.553 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.554641 | orchestrator | 00:01:18.553 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-13 00:01:18.554645 | orchestrator | 00:01:18.553 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-13 00:01:18.554650 | orchestrator | 00:01:18.553 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-13 00:01:18.554655 | orchestrator | 00:01:18.553 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.554660 | orchestrator | 00:01:18.553 STDOUT terraform:  } 2025-04-13 00:01:18.554665 | orchestrator | 00:01:18.553 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-04-13 00:01:18.554670 | orchestrator | 00:01:18.553 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-04-13 00:01:18.554676 | orchestrator | 00:01:18.554 STDOUT terraform:  + direction = "ingress" 2025-04-13 00:01:18.554681 | orchestrator | 00:01:18.554 STDOUT terraform:  + ethertype = "IPv4" 2025-04-13 00:01:18.554686 | orchestrator | 00:01:18.554 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.554691 | orchestrator | 00:01:18.554 STDOUT terraform:  + protocol = "tcp" 2025-04-13 00:01:18.554696 | orchestrator | 00:01:18.554 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.554701 | orchestrator | 00:01:18.554 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-13 00:01:18.554709 | 
orchestrator | 00:01:18.554 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-13 00:01:18.554714 | orchestrator | 00:01:18.554 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-13 00:01:18.554719 | orchestrator | 00:01:18.554 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.554724 | orchestrator | 00:01:18.554 STDOUT terraform:  } 2025-04-13 00:01:18.554729 | orchestrator | 00:01:18.554 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-04-13 00:01:18.554734 | orchestrator | 00:01:18.554 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-04-13 00:01:18.554739 | orchestrator | 00:01:18.554 STDOUT terraform:  + direction = "ingress" 2025-04-13 00:01:18.554744 | orchestrator | 00:01:18.554 STDOUT terraform:  + ethertype = "IPv4" 2025-04-13 00:01:18.554749 | orchestrator | 00:01:18.554 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.554754 | orchestrator | 00:01:18.554 STDOUT terraform:  + protocol = "udp" 2025-04-13 00:01:18.554759 | orchestrator | 00:01:18.554 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.554766 | orchestrator | 00:01:18.554 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-13 00:01:18.554791 | orchestrator | 00:01:18.554 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-13 00:01:18.554797 | orchestrator | 00:01:18.554 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-13 00:01:18.554802 | orchestrator | 00:01:18.554 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.554807 | orchestrator | 00:01:18.554 STDOUT terraform:  } 2025-04-13 00:01:18.554812 | orchestrator | 00:01:18.554 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-04-13 00:01:18.554817 | orchestrator | 00:01:18.554 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-04-13 00:01:18.554822 | orchestrator | 00:01:18.554 STDOUT terraform:  + direction = "ingress" 2025-04-13 00:01:18.554827 | orchestrator | 00:01:18.554 STDOUT terraform:  + ethertype = "IPv4" 2025-04-13 00:01:18.554832 | orchestrator | 00:01:18.554 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.554837 | orchestrator | 00:01:18.554 STDOUT terraform:  + protocol = "icmp" 2025-04-13 00:01:18.554843 | orchestrator | 00:01:18.554 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.554860 | orchestrator | 00:01:18.554 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-13 00:01:18.554865 | orchestrator | 00:01:18.554 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-13 00:01:18.554872 | orchestrator | 00:01:18.554 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-13 00:01:18.554890 | orchestrator | 00:01:18.554 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.554958 | orchestrator | 00:01:18.554 STDOUT terraform:  } 2025-04-13 00:01:18.554967 | orchestrator | 00:01:18.554 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-04-13 00:01:18.555000 | orchestrator | 00:01:18.554 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-04-13 00:01:18.555025 | orchestrator | 00:01:18.554 STDOUT terraform:  + description = "vrrp" 2025-04-13 00:01:18.555044 | orchestrator | 00:01:18.555 STDOUT terraform:  + direction = "ingress" 2025-04-13 00:01:18.555051 | orchestrator | 00:01:18.555 STDOUT terraform:  + ethertype = "IPv4" 2025-04-13 00:01:18.555127 | orchestrator | 00:01:18.555 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.555136 | orchestrator | 00:01:18.555 STDOUT terraform:  + protocol = "112" 2025-04-13 00:01:18.555168 | orchestrator | 00:01:18.555 STDOUT terraform:  + region = 
(known after apply) 2025-04-13 00:01:18.555202 | orchestrator | 00:01:18.555 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-13 00:01:18.555220 | orchestrator | 00:01:18.555 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-13 00:01:18.555250 | orchestrator | 00:01:18.555 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-13 00:01:18.555281 | orchestrator | 00:01:18.555 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.555344 | orchestrator | 00:01:18.555 STDOUT terraform:  } 2025-04-13 00:01:18.555352 | orchestrator | 00:01:18.555 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-04-13 00:01:18.555388 | orchestrator | 00:01:18.555 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-04-13 00:01:18.555417 | orchestrator | 00:01:18.555 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.555451 | orchestrator | 00:01:18.555 STDOUT terraform:  + description = "management security group" 2025-04-13 00:01:18.555480 | orchestrator | 00:01:18.555 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.555508 | orchestrator | 00:01:18.555 STDOUT terraform:  + name = "testbed-management" 2025-04-13 00:01:18.555536 | orchestrator | 00:01:18.555 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.555564 | orchestrator | 00:01:18.555 STDOUT terraform:  + stateful = (known after apply) 2025-04-13 00:01:18.555592 | orchestrator | 00:01:18.555 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.555599 | orchestrator | 00:01:18.555 STDOUT terraform:  } 2025-04-13 00:01:18.555647 | orchestrator | 00:01:18.555 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-04-13 00:01:18.555694 | orchestrator | 00:01:18.555 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 
2025-04-13 00:01:18.555722 | orchestrator | 00:01:18.555 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.555755 | orchestrator | 00:01:18.555 STDOUT terraform:  + description = "node security group" 2025-04-13 00:01:18.555784 | orchestrator | 00:01:18.555 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.555808 | orchestrator | 00:01:18.555 STDOUT terraform:  + name = "testbed-node" 2025-04-13 00:01:18.555836 | orchestrator | 00:01:18.555 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.555865 | orchestrator | 00:01:18.555 STDOUT terraform:  + stateful = (known after apply) 2025-04-13 00:01:18.555889 | orchestrator | 00:01:18.555 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.555896 | orchestrator | 00:01:18.555 STDOUT terraform:  } 2025-04-13 00:01:18.555942 | orchestrator | 00:01:18.555 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-04-13 00:01:18.555985 | orchestrator | 00:01:18.555 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-04-13 00:01:18.556015 | orchestrator | 00:01:18.555 STDOUT terraform:  + all_tags = (known after apply) 2025-04-13 00:01:18.556045 | orchestrator | 00:01:18.556 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-04-13 00:01:18.556063 | orchestrator | 00:01:18.556 STDOUT terraform:  + dns_nameservers = [ 2025-04-13 00:01:18.556070 | orchestrator | 00:01:18.556 STDOUT terraform:  + "8.8.8.8", 2025-04-13 00:01:18.556099 | orchestrator | 00:01:18.556 STDOUT terraform:  + "9.9.9.9", 2025-04-13 00:01:18.556106 | orchestrator | 00:01:18.556 STDOUT terraform:  ] 2025-04-13 00:01:18.556131 | orchestrator | 00:01:18.556 STDOUT terraform:  + enable_dhcp = true 2025-04-13 00:01:18.556162 | orchestrator | 00:01:18.556 STDOUT terraform:  + gateway_ip = (known after apply) 2025-04-13 00:01:18.556192 | orchestrator | 00:01:18.556 STDOUT terraform:  + id = (known after apply) 
2025-04-13 00:01:18.556210 | orchestrator | 00:01:18.556 STDOUT terraform:  + ip_version = 4 2025-04-13 00:01:18.556238 | orchestrator | 00:01:18.556 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-04-13 00:01:18.556268 | orchestrator | 00:01:18.556 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-04-13 00:01:18.556306 | orchestrator | 00:01:18.556 STDOUT terraform:  + name = "subnet-testbed-management" 2025-04-13 00:01:18.556337 | orchestrator | 00:01:18.556 STDOUT terraform:  + network_id = (known after apply) 2025-04-13 00:01:18.556355 | orchestrator | 00:01:18.556 STDOUT terraform:  + no_gateway = false 2025-04-13 00:01:18.556384 | orchestrator | 00:01:18.556 STDOUT terraform:  + region = (known after apply) 2025-04-13 00:01:18.556413 | orchestrator | 00:01:18.556 STDOUT terraform:  + service_types = (known after apply) 2025-04-13 00:01:18.556443 | orchestrator | 00:01:18.556 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-13 00:01:18.556450 | orchestrator | 00:01:18.556 STDOUT terraform:  + allocation_pool { 2025-04-13 00:01:18.556482 | orchestrator | 00:01:18.556 STDOUT terraform:  + end = "192.168.31.250" 2025-04-13 00:01:18.556506 | orchestrator | 00:01:18.556 STDOUT terraform:  + start = "192.168.31.200" 2025-04-13 00:01:18.556513 | orchestrator | 00:01:18.556 STDOUT terraform:  } 2025-04-13 00:01:18.556519 | orchestrator | 00:01:18.556 STDOUT terraform:  } 2025-04-13 00:01:18.556549 | orchestrator | 00:01:18.556 STDOUT terraform:  # terraform_data.image will be created 2025-04-13 00:01:18.556574 | orchestrator | 00:01:18.556 STDOUT terraform:  + resource "terraform_data" "image" { 2025-04-13 00:01:18.556598 | orchestrator | 00:01:18.556 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.556617 | orchestrator | 00:01:18.556 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-13 00:01:18.556627 | orchestrator | 00:01:18.556 STDOUT terraform:  + output = (known after apply) 2025-04-13 00:01:18.556644 
| orchestrator | 00:01:18.556 STDOUT terraform:  } 2025-04-13 00:01:18.556672 | orchestrator | 00:01:18.556 STDOUT terraform:  # terraform_data.image_node will be created 2025-04-13 00:01:18.556701 | orchestrator | 00:01:18.556 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-04-13 00:01:18.556724 | orchestrator | 00:01:18.556 STDOUT terraform:  + id = (known after apply) 2025-04-13 00:01:18.556742 | orchestrator | 00:01:18.556 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-13 00:01:18.556767 | orchestrator | 00:01:18.556 STDOUT terraform:  + output = (known after apply) 2025-04-13 00:01:18.556802 | orchestrator | 00:01:18.556 STDOUT terraform:  } 2025-04-13 00:01:18.556809 | orchestrator | 00:01:18.556 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-04-13 00:01:18.556836 | orchestrator | 00:01:18.556 STDOUT terraform: Changes to Outputs: 2025-04-13 00:01:18.556843 | orchestrator | 00:01:18.556 STDOUT terraform:  + manager_address = (sensitive value) 2025-04-13 00:01:18.556866 | orchestrator | 00:01:18.556 STDOUT terraform:  + private_key = (sensitive value) 2025-04-13 00:01:18.776675 | orchestrator | 00:01:18.776 STDOUT terraform: terraform_data.image_node: Creating... 2025-04-13 00:01:18.777463 | orchestrator | 00:01:18.776 STDOUT terraform: terraform_data.image: Creating... 2025-04-13 00:01:18.777501 | orchestrator | 00:01:18.777 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=497c9ece-0beb-d6bb-e300-7fd268ae2243] 2025-04-13 00:01:18.790464 | orchestrator | 00:01:18.777 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=6883f412-d9d4-b39c-d6be-dc8968a8c32c] 2025-04-13 00:01:18.790520 | orchestrator | 00:01:18.790 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-04-13 00:01:18.791683 | orchestrator | 00:01:18.791 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 
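The plan above ("82 to add, 0 to change, 0 to destroy") enumerates each resource with its planned attributes. As an illustration, the `security_group_management_rule1` resource printed earlier in the plan corresponds to roughly the following HCL; the attribute values are copied from the plan output, but the `security_group_id` reference wiring is an assumption, since the actual configuration is not shown in this log.

```hcl
# Hypothetical reconstruction of security_group_management_rule1 from the
# plan output above. Values marked "(known after apply)" in the plan are
# computed by the provider and do not appear in the configuration.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description      = "ssh"
  direction        = "ingress"
  ethertype        = "IPv4"
  protocol         = "tcp"
  port_range_min   = 22
  port_range_max   = 22
  remote_ip_prefix = "0.0.0.0/0"

  # Assumed reference to the testbed-management security group created in
  # the same plan; the real configuration may wire this differently.
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```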
2025-04-13 00:01:18.799543 | orchestrator | 00:01:18.799 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-04-13 00:01:18.800312 | orchestrator | 00:01:18.799 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-04-13 00:01:18.802201 | orchestrator | 00:01:18.800 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-04-13 00:01:18.802316 | orchestrator | 00:01:18.801 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-04-13 00:01:18.803251 | orchestrator | 00:01:18.803 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-04-13 00:01:18.803304 | orchestrator | 00:01:18.803 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-04-13 00:01:18.809917 | orchestrator | 00:01:18.809 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-04-13 00:01:18.811815 | orchestrator | 00:01:18.811 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-04-13 00:01:19.232301 | orchestrator | 00:01:19.231 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-04-13 00:01:19.239254 | orchestrator | 00:01:19.239 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-04-13 00:01:19.248686 | orchestrator | 00:01:19.248 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-04-13 00:01:19.257153 | orchestrator | 00:01:19.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-04-13 00:01:19.521601 | orchestrator | 00:01:19.521 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-04-13 00:01:19.531889 | orchestrator | 00:01:19.531 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 
2025-04-13 00:01:24.603809 | orchestrator | 00:01:24.603 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=d905a98d-476d-4b14-b5f7-7d63cc27ea2f] 2025-04-13 00:01:24.611117 | orchestrator | 00:01:24.610 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-04-13 00:01:28.803831 | orchestrator | 00:01:28.803 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-04-13 00:01:28.806075 | orchestrator | 00:01:28.805 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-04-13 00:01:28.807106 | orchestrator | 00:01:28.805 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-04-13 00:01:28.807166 | orchestrator | 00:01:28.806 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-04-13 00:01:28.811398 | orchestrator | 00:01:28.811 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-04-13 00:01:28.813821 | orchestrator | 00:01:28.813 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-04-13 00:01:29.239849 | orchestrator | 00:01:29.239 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-04-13 00:01:29.258443 | orchestrator | 00:01:29.258 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-04-13 00:01:29.393862 | orchestrator | 00:01:29.393 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=f5c76205-09bb-4a16-ab8f-39ffb03c9143] 2025-04-13 00:01:29.400630 | orchestrator | 00:01:29.400 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 
2025-04-13 00:01:29.414731 | orchestrator | 00:01:29.414 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=71ac43d1-dda3-4017-bb0e-4637e963cb04] 2025-04-13 00:01:29.421074 | orchestrator | 00:01:29.420 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-04-13 00:01:29.432203 | orchestrator | 00:01:29.431 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=d62d4166-25a1-4741-94fc-59c78379b097] 2025-04-13 00:01:29.436908 | orchestrator | 00:01:29.436 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-04-13 00:01:29.446624 | orchestrator | 00:01:29.446 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=f7620518-2044-4595-90df-c620cad18d8d] 2025-04-13 00:01:29.452848 | orchestrator | 00:01:29.452 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-04-13 00:01:29.468982 | orchestrator | 00:01:29.468 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=15f38305-5d3a-4a2a-94a9-ec4f360f12f0] 2025-04-13 00:01:29.474473 | orchestrator | 00:01:29.474 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-04-13 00:01:29.492968 | orchestrator | 00:01:29.492 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=24d70fc8-7961-4caf-9f39-267d5072f1bc] 2025-04-13 00:01:29.503875 | orchestrator | 00:01:29.503 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 
2025-04-13 00:01:29.517407 | orchestrator | 00:01:29.517 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 11s [id=95b24700-cfbe-4d9d-a7ca-ca6e4d2b6d43] 2025-04-13 00:01:29.517694 | orchestrator | 00:01:29.517 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=ea334510-65a0-4c82-ab7f-212ffba0ceeb] 2025-04-13 00:01:29.523134 | orchestrator | 00:01:29.522 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-04-13 00:01:29.523409 | orchestrator | 00:01:29.523 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-04-13 00:01:29.532399 | orchestrator | 00:01:29.532 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-04-13 00:01:29.704145 | orchestrator | 00:01:29.703 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=ddf16837-33ca-409f-b739-a4d4760cfc5d] 2025-04-13 00:01:29.715188 | orchestrator | 00:01:29.714 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-04-13 00:01:34.611988 | orchestrator | 00:01:34.611 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-04-13 00:01:34.794182 | orchestrator | 00:01:34.793 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=aad8aa45-f541-429b-bfb0-28cd3fbd229c] 2025-04-13 00:01:34.803382 | orchestrator | 00:01:34.803 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-04-13 00:01:39.401900 | orchestrator | 00:01:39.401 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-04-13 00:01:39.421711 | orchestrator | 00:01:39.421 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... 
[10s elapsed] 2025-04-13 00:01:39.437873 | orchestrator | 00:01:39.437 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-04-13 00:01:39.454291 | orchestrator | 00:01:39.454 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-04-13 00:01:39.475555 | orchestrator | 00:01:39.475 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-04-13 00:01:39.504988 | orchestrator | 00:01:39.504 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-04-13 00:01:39.524522 | orchestrator | 00:01:39.524 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-04-13 00:01:39.571328 | orchestrator | 00:01:39.524 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-04-13 00:01:39.571455 | orchestrator | 00:01:39.570 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 11s [id=d771f52a-9ada-4427-8de2-0003eafe1256] 2025-04-13 00:01:39.586569 | orchestrator | 00:01:39.586 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-04-13 00:01:39.609074 | orchestrator | 00:01:39.608 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 11s [id=0bf36b3b-f07e-4ca4-96cb-185377001260] 2025-04-13 00:01:39.619453 | orchestrator | 00:01:39.619 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-04-13 00:01:39.641584 | orchestrator | 00:01:39.641 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=beb6d58d-9f9a-40a9-9a80-602a3ce24890] 2025-04-13 00:01:39.651929 | orchestrator | 00:01:39.651 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
2025-04-13 00:01:39.661057 | orchestrator | 00:01:39.660 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=7742d708-e0a6-4322-a2de-81c274934e05] 2025-04-13 00:01:39.672083 | orchestrator | 00:01:39.671 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-04-13 00:01:39.675822 | orchestrator | 00:01:39.675 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=af081516d4c5fdc6f01d62caccaa0aeaf5fd3e3b] 2025-04-13 00:01:39.680564 | orchestrator | 00:01:39.680 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-04-13 00:01:39.696223 | orchestrator | 00:01:39.695 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=bd3f4097-e1b2-4e0f-b572-2003c7cd8b15] 2025-04-13 00:01:39.702842 | orchestrator | 00:01:39.702 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-04-13 00:01:39.709209 | orchestrator | 00:01:39.708 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=fa72ddf05e397c2b135159b90eeceef08129ec3b] 2025-04-13 00:01:39.716357 | orchestrator | 00:01:39.716 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-04-13 00:01:39.717712 | orchestrator | 00:01:39.717 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=466f66ff-268f-471d-abe8-9f0f353ab0cc] 2025-04-13 00:01:39.720207 | orchestrator | 00:01:39.719 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 10s [id=a0e179ac-f513-4bce-8698-5c5d77bb97a6] 2025-04-13 00:01:39.721571 | orchestrator | 00:01:39.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-04-13 00:01:39.723163 | orchestrator | 00:01:39.722 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 
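The indexed resources above (`node_volume[0]` through `node_volume[17]`, `node_base_volume[0..5]`) indicate Terraform's `count` meta-argument. A minimal hypothetical sketch of such a declaration — the names, count, and size here are assumptions for illustration, not taken from the testbed's actual Terraform configuration:

```hcl
# Hypothetical sketch: one block storage resource expanded into many
# instances via count. The log shows node_volume indices 0..17.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 18                               # assumed from indices in the log
  name  = "testbed-volume-${count.index}"  # assumed naming scheme
  size  = 20                               # GiB; assumed value
}
```

With `count`, Terraform creates the instances concurrently (subject to its parallelism limit), which is why the log interleaves many "Creating..." / "Still creating..." / "Creation complete" lines.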
2025-04-13 00:01:39.731416 | orchestrator | 00:01:39.731 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=9b430468-eb80-4fc4-b9b2-ed2873d86014] 2025-04-13 00:01:40.036679 | orchestrator | 00:01:40.036 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=13eeac56-881b-4135-903b-092bb0900c0a] 2025-04-13 00:01:44.804563 | orchestrator | 00:01:44.804 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-04-13 00:01:45.221972 | orchestrator | 00:01:45.221 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=c70cca57-340b-42ee-85c2-b3ee41d2b128] 2025-04-13 00:01:45.613227 | orchestrator | 00:01:45.612 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=fdcbde1e-dd1e-418d-b767-c53937a17d4c] 2025-04-13 00:01:45.622315 | orchestrator | 00:01:45.622 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-04-13 00:01:49.587089 | orchestrator | 00:01:49.586 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-04-13 00:01:49.620464 | orchestrator | 00:01:49.620 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-04-13 00:01:49.652810 | orchestrator | 00:01:49.652 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-04-13 00:01:49.681396 | orchestrator | 00:01:49.681 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-04-13 00:01:49.723016 | orchestrator | 00:01:49.722 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... 
[10s elapsed] 2025-04-13 00:01:49.978874 | orchestrator | 00:01:49.978 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=fd403a5d-f47e-4cc0-967a-066b990b05e8] 2025-04-13 00:01:49.979522 | orchestrator | 00:01:49.979 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=784fd8b6-165f-4d54-8bd6-d3b5fe38df06] 2025-04-13 00:01:49.988077 | orchestrator | 00:01:49.987 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=f620be22-b7d1-409f-9583-d71db6137099] 2025-04-13 00:01:50.041295 | orchestrator | 00:01:50.040 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=bd1b3b5b-24e0-4b83-98ac-551986a77df7] 2025-04-13 00:01:50.042241 | orchestrator | 00:01:50.041 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=fe299df1-123f-45eb-a46f-1bc77e9ea0d1] 2025-04-13 00:01:52.280179 | orchestrator | 00:01:52.279 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 6s [id=f2cba75c-ba90-4340-84dc-97530c04eb7f] 2025-04-13 00:01:52.290462 | orchestrator | 00:01:52.290 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-04-13 00:01:52.293604 | orchestrator | 00:01:52.293 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-04-13 00:01:52.485799 | orchestrator | 00:01:52.293 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-04-13 00:01:52.485932 | orchestrator | 00:01:52.485 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=5ec173c4-0d71-4aa4-87f7-84f818cb71d7] 2025-04-13 00:01:52.501355 | orchestrator | 00:01:52.501 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 
2025-04-13 00:01:52.501442 | orchestrator | 00:01:52.501 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-04-13 00:01:52.501512 | orchestrator | 00:01:52.501 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-04-13 00:01:52.501546 | orchestrator | 00:01:52.501 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-04-13 00:01:52.501601 | orchestrator | 00:01:52.501 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-04-13 00:01:52.502575 | orchestrator | 00:01:52.502 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-04-13 00:01:52.582155 | orchestrator | 00:01:52.581 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=df12c313-beef-4ab7-82df-686e29f8dd8c] 2025-04-13 00:01:52.591747 | orchestrator | 00:01:52.591 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-04-13 00:01:52.594943 | orchestrator | 00:01:52.594 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-04-13 00:01:52.599089 | orchestrator | 00:01:52.598 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-04-13 00:01:52.628921 | orchestrator | 00:01:52.628 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=45480cbc-fadd-484b-81ed-82e6f7bf2bd4] 2025-04-13 00:01:52.635894 | orchestrator | 00:01:52.635 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
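The `security_group_management` group and its `_rule*` resources above follow the usual OpenStack provider pattern of a group plus separate rule resources referencing it. A hedged sketch; the protocol, ports, and CIDR are assumptions:

```hcl
# Hypothetical sketch of a security group with an attached rule,
# mirroring security_group_management / security_group_management_rule1.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"   # assumed name
  description = "management traffic"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22               # assumed: SSH access to the manager
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"      # assumed; likely narrower in practice
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```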
2025-04-13 00:01:52.788129 | orchestrator | 00:01:52.787 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=8efaadc4-fe78-45bf-9185-a15430a0a2fa] 2025-04-13 00:01:52.795599 | orchestrator | 00:01:52.795 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=0fd01754-aafa-40a0-8ab9-4edb81337c6f] 2025-04-13 00:01:52.807433 | orchestrator | 00:01:52.807 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-04-13 00:01:52.811075 | orchestrator | 00:01:52.810 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-04-13 00:01:52.975835 | orchestrator | 00:01:52.975 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=7eb700c9-95ee-45fa-96cf-1ccc801d564f] 2025-04-13 00:01:52.989201 | orchestrator | 00:01:52.988 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-04-13 00:01:53.054337 | orchestrator | 00:01:53.053 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=db153587-b9e9-43a7-a72f-e95551191004] 2025-04-13 00:01:53.073321 | orchestrator | 00:01:53.072 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-04-13 00:01:53.107724 | orchestrator | 00:01:53.107 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=0d85dfb2-d348-43a5-96fd-090147d5e140] 2025-04-13 00:01:53.122003 | orchestrator | 00:01:53.121 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 
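The per-node management ports created above are presumably pre-allocated so each server gets a stable fixed IP on the management subnet. A sketch under that assumption — the `net_management` network resource name is hypothetical (only `subnet_management` appears in the log):

```hcl
# Hypothetical sketch of the indexed node_port_management resources.
resource "openstack_networking_port_v2" "node_port_management" {
  count              = 6                # log shows indices 0..5
  name               = "testbed-node-port-${count.index}"  # assumed name
  network_id         = openstack_networking_network_v2.net_management.id  # hypothetical resource
  security_group_ids = [openstack_networking_secgroup_v2.security_group_node.id]

  fixed_ip {
    subnet_id = openstack_networking_subnet_v2.subnet_management.id
  }
}
```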
2025-04-13 00:01:53.271956 | orchestrator | 00:01:53.271 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=955b17f8-969d-4047-8f42-8425771de3f2] 2025-04-13 00:01:53.279412 | orchestrator | 00:01:53.279 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-04-13 00:01:53.299380 | orchestrator | 00:01:53.299 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=b0fd510d-9466-47e1-a1b0-6ccfacdd394b] 2025-04-13 00:01:53.539950 | orchestrator | 00:01:53.539 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=98ad9b99-e6a5-4ed8-a4f4-b4c8ae7bf8c1] 2025-04-13 00:01:58.308814 | orchestrator | 00:01:58.308 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=2554a1cb-da93-4094-b75c-14e91b4dfd2b] 2025-04-13 00:01:58.368311 | orchestrator | 00:01:58.367 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=a743449f-2df2-4a13-9c3b-a6c0a649e1e9] 2025-04-13 00:01:58.543401 | orchestrator | 00:01:58.542 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=54c4730d-da67-4fc5-a07c-95aa51118e99] 2025-04-13 00:01:58.568704 | orchestrator | 00:01:58.568 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=8236dfd3-75bb-4b29-b970-b86026d046f3] 2025-04-13 00:01:58.771970 | orchestrator | 00:01:58.771 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=14d52214-ec5e-4d92-9777-214ac32b7e9a] 2025-04-13 00:01:59.125924 | orchestrator | 00:01:59.125 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=287c30fc-1bf5-4226-92d4-e8fa4d79781b] 2025-04-13 00:01:59.199691 | 
orchestrator | 00:01:59.198 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=cd4690ad-70a1-406a-a379-dd8fa74a1614] 2025-04-13 00:01:59.206152 | orchestrator | 00:01:59.205 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-04-13 00:01:59.325788 | orchestrator | 00:01:59.325 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=df95a158-9b8c-4a31-ac55-37e93b0ed8d4] 2025-04-13 00:01:59.350876 | orchestrator | 00:01:59.350 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-04-13 00:01:59.359944 | orchestrator | 00:01:59.359 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-04-13 00:01:59.361507 | orchestrator | 00:01:59.361 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-04-13 00:01:59.361809 | orchestrator | 00:01:59.361 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-04-13 00:01:59.372630 | orchestrator | 00:01:59.372 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-04-13 00:01:59.376472 | orchestrator | 00:01:59.376 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-04-13 00:02:05.675447 | orchestrator | 00:02:05.675 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=7316d266-8046-430d-b5f9-2b0662ed3f16] 2025-04-13 00:02:05.696907 | orchestrator | 00:02:05.696 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-04-13 00:02:05.703163 | orchestrator | 00:02:05.703 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-04-13 00:02:05.704857 | orchestrator | 00:02:05.704 STDOUT terraform: local_file.inventory: Creating... 
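The six `node_server` instances begin only after their ports exist, and the manager's floating IP is bound via a separate association resource. A hedged sketch of both patterns; `var.node_flavor`, `var.image`, and `var.keypair` are assumed variable names:

```hcl
# Hypothetical sketch: servers attached to the pre-created ports, plus
# the floating-IP association seen in the log.
resource "openstack_compute_instance_v2" "node_server" {
  count       = 6
  name        = "testbed-node-${count.index}"  # assumed naming scheme
  flavor_name = var.node_flavor                # assumed variable
  image_name  = var.image                      # assumed variable
  key_pair    = var.keypair                    # assumed variable

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```

Referencing the port in the instance (and the floating IP in the association) gives Terraform the implicit dependency ordering visible in the timestamps above.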
2025-04-13 00:02:05.706769 | orchestrator | 00:02:05.706 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=9c10f8f3ee2b8c84024df755d911fc944e529260] 2025-04-13 00:02:05.711883 | orchestrator | 00:02:05.711 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=46bf0c4cc2f224098f0a58f14dac7495cdc94038] 2025-04-13 00:02:06.427754 | orchestrator | 00:02:06.427 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=7316d266-8046-430d-b5f9-2b0662ed3f16] 2025-04-13 00:02:09.351895 | orchestrator | 00:02:09.351 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-04-13 00:02:09.362920 | orchestrator | 00:02:09.362 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-04-13 00:02:09.367241 | orchestrator | 00:02:09.367 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-04-13 00:02:09.369482 | orchestrator | 00:02:09.369 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-04-13 00:02:09.374650 | orchestrator | 00:02:09.374 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-04-13 00:02:09.377024 | orchestrator | 00:02:09.376 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-04-13 00:02:19.352166 | orchestrator | 00:02:19.351 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-04-13 00:02:19.364499 | orchestrator | 00:02:19.364 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-04-13 00:02:19.367705 | orchestrator | 00:02:19.367 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[20s elapsed] 2025-04-13 00:02:19.370932 | orchestrator | 00:02:19.370 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-04-13 00:02:19.375258 | orchestrator | 00:02:19.375 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-04-13 00:02:19.377594 | orchestrator | 00:02:19.377 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-04-13 00:02:19.839054 | orchestrator | 00:02:19.838 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=cbee3e78-e7fb-4571-84d8-c0c1f78fdf2c] 2025-04-13 00:02:19.844489 | orchestrator | 00:02:19.844 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=4d25a5f8-23d2-43a0-9faa-134d7f63261f] 2025-04-13 00:02:19.890285 | orchestrator | 00:02:19.889 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=88c853c0-c9ca-4cc7-a4b7-0291ea88192c] 2025-04-13 00:02:19.994448 | orchestrator | 00:02:19.994 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=21174c4b-da6a-4268-8a8e-9d4fc4370f83] 2025-04-13 00:02:29.372238 | orchestrator | 00:02:29.371 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-04-13 00:02:29.378544 | orchestrator | 00:02:29.378 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... 
[30s elapsed] 2025-04-13 00:02:29.984144 | orchestrator | 00:02:29.983 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=da3f2a04-725c-45bb-8c01-ae420a3e2217] 2025-04-13 00:02:30.972434 | orchestrator | 00:02:30.972 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 32s [id=888e1b5e-85b6-4e96-aac8-4cac9851c9dd] 2025-04-13 00:02:31.000802 | orchestrator | 00:02:31.000 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-04-13 00:02:31.004201 | orchestrator | 00:02:31.003 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-04-13 00:02:31.004934 | orchestrator | 00:02:31.004 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-04-13 00:02:31.005198 | orchestrator | 00:02:31.005 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=6932612690877894697] 2025-04-13 00:02:31.009625 | orchestrator | 00:02:31.009 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-04-13 00:02:31.014113 | orchestrator | 00:02:31.013 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 2025-04-13 00:02:31.023839 | orchestrator | 00:02:31.022 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-04-13 00:02:31.025286 | orchestrator | 00:02:31.025 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 2025-04-13 00:02:31.027448 | orchestrator | 00:02:31.027 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-04-13 00:02:31.030269 | orchestrator | 00:02:31.030 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 
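The attachment IDs above pair server UUIDs with volume UUIDs; 18 attachments across 6 servers suggests three data volumes per node, with the target instance selected by index arithmetic. A sketch under that inferred mapping (e.g. `attachment[7]` and `attachment[13]` both land on the same server in the log, consistent with `count.index % 6`):

```hcl
# Hypothetical sketch: attach 18 volumes to 6 servers (3 each), and a
# null_resource acting as a completion barrier for the node servers.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 18
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}

resource "null_resource" "node_semaphore" {
  depends_on = [openstack_compute_instance_v2.node_server]
}
```

The `null_resource.node_semaphore` completes in 0s in the log, consistent with its role as a synchronization point rather than a real resource.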
2025-04-13 00:02:31.034080 | orchestrator | 00:02:31.033 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-04-13 00:02:31.038933 | orchestrator | 00:02:31.038 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-04-13 00:02:36.326858 | orchestrator | 00:02:36.326 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=da3f2a04-725c-45bb-8c01-ae420a3e2217/71ac43d1-dda3-4017-bb0e-4637e963cb04] 2025-04-13 00:02:36.346898 | orchestrator | 00:02:36.346 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 2025-04-13 00:02:36.354492 | orchestrator | 00:02:36.354 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=88c853c0-c9ca-4cc7-a4b7-0291ea88192c/bd3f4097-e1b2-4e0f-b572-2003c7cd8b15] 2025-04-13 00:02:36.357850 | orchestrator | 00:02:36.357 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=21174c4b-da6a-4268-8a8e-9d4fc4370f83/d771f52a-9ada-4427-8de2-0003eafe1256] 2025-04-13 00:02:36.366596 | orchestrator | 00:02:36.366 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=888e1b5e-85b6-4e96-aac8-4cac9851c9dd/7742d708-e0a6-4322-a2de-81c274934e05] 2025-04-13 00:02:36.368897 | orchestrator | 00:02:36.368 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-04-13 00:02:36.379944 | orchestrator | 00:02:36.379 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 
2025-04-13 00:02:36.380798 | orchestrator | 00:02:36.380 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=4d25a5f8-23d2-43a0-9faa-134d7f63261f/ea334510-65a0-4c82-ab7f-212ffba0ceeb] 2025-04-13 00:02:36.398883 | orchestrator | 00:02:36.398 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=888e1b5e-85b6-4e96-aac8-4cac9851c9dd/f7620518-2044-4595-90df-c620cad18d8d] 2025-04-13 00:02:36.399204 | orchestrator | 00:02:36.399 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=88c853c0-c9ca-4cc7-a4b7-0291ea88192c/24d70fc8-7961-4caf-9f39-267d5072f1bc] 2025-04-13 00:02:36.399344 | orchestrator | 00:02:36.399 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-04-13 00:02:36.399612 | orchestrator | 00:02:36.399 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-04-13 00:02:36.409630 | orchestrator | 00:02:36.409 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-04-13 00:02:36.413009 | orchestrator | 00:02:36.412 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=21174c4b-da6a-4268-8a8e-9d4fc4370f83/466f66ff-268f-471d-abe8-9f0f353ab0cc] 2025-04-13 00:02:36.414811 | orchestrator | 00:02:36.414 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-04-13 00:02:36.424594 | orchestrator | 00:02:36.423 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 
2025-04-13 00:02:36.427971 | orchestrator | 00:02:36.424 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 5s [id=4d25a5f8-23d2-43a0-9faa-134d7f63261f/aad8aa45-f541-429b-bfb0-28cd3fbd229c] 2025-04-13 00:02:36.428025 | orchestrator | 00:02:36.427 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=88c853c0-c9ca-4cc7-a4b7-0291ea88192c/d62d4166-25a1-4741-94fc-59c78379b097] 2025-04-13 00:02:36.441043 | orchestrator | 00:02:36.440 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-04-13 00:02:41.668487 | orchestrator | 00:02:41.667 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=4d25a5f8-23d2-43a0-9faa-134d7f63261f/a0e179ac-f513-4bce-8698-5c5d77bb97a6] 2025-04-13 00:02:41.704968 | orchestrator | 00:02:41.704 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 6s [id=da3f2a04-725c-45bb-8c01-ae420a3e2217/0bf36b3b-f07e-4ca4-96cb-185377001260] 2025-04-13 00:02:41.722474 | orchestrator | 00:02:41.721 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=cbee3e78-e7fb-4571-84d8-c0c1f78fdf2c/9b430468-eb80-4fc4-b9b2-ed2873d86014] 2025-04-13 00:02:41.732200 | orchestrator | 00:02:41.731 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 6s [id=21174c4b-da6a-4268-8a8e-9d4fc4370f83/15f38305-5d3a-4a2a-94a9-ec4f360f12f0] 2025-04-13 00:02:41.737773 | orchestrator | 00:02:41.737 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=888e1b5e-85b6-4e96-aac8-4cac9851c9dd/beb6d58d-9f9a-40a9-9a80-602a3ce24890] 2025-04-13 00:02:41.747498 | orchestrator | 00:02:41.747 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 
6s [id=da3f2a04-725c-45bb-8c01-ae420a3e2217/ddf16837-33ca-409f-b739-a4d4760cfc5d] 2025-04-13 00:02:41.755528 | orchestrator | 00:02:41.755 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 6s [id=cbee3e78-e7fb-4571-84d8-c0c1f78fdf2c/95b24700-cfbe-4d9d-a7ca-ca6e4d2b6d43] 2025-04-13 00:02:41.765664 | orchestrator | 00:02:41.765 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=cbee3e78-e7fb-4571-84d8-c0c1f78fdf2c/f5c76205-09bb-4a16-ab8f-39ffb03c9143] 2025-04-13 00:02:46.442100 | orchestrator | 00:02:46.441 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-04-13 00:02:56.443348 | orchestrator | 00:02:56.443 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-04-13 00:02:57.069779 | orchestrator | 00:02:57.069 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=b83b98c7-bcee-49af-ac79-85aba18ff2b8] 2025-04-13 00:02:57.094764 | orchestrator | 00:02:57.094 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
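The run ends with `Apply complete!` and two outputs printed without values, which is typical when outputs are marked `sensitive` (or consumed from generated files such as `local_file.MANAGER_ADDRESS` rather than the console). A hedged sketch; the `tls_private_key.ssh` resource name and the filename are assumptions:

```hcl
# Hypothetical sketch of the outputs and generated file seen above.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true   # assumed reason the console shows no value
}

output "private_key" {
  value     = tls_private_key.ssh.private_key_pem  # assumed key resource
  sensitive = true
}

resource "local_file" "MANAGER_ADDRESS" {
  filename = ".MANAGER_ADDRESS"                    # assumed path
  content  = openstack_networking_floatingip_v2.manager_floating_ip.address
}
```

The subsequent `TASK [Fetch manager address]` step then reads the address from the workspace, matching this file-based handoff.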
2025-04-13 00:02:57.103834 | orchestrator | 00:02:57.094 STDOUT terraform: Outputs: 2025-04-13 00:02:57.103948 | orchestrator | 00:02:57.094 STDOUT terraform: manager_address = 2025-04-13 00:02:57.104119 | orchestrator | 00:02:57.094 STDOUT terraform: private_key = 2025-04-13 00:03:07.359285 | orchestrator | changed 2025-04-13 00:03:07.397184 | 2025-04-13 00:03:07.397286 | TASK [Fetch manager address] 2025-04-13 00:03:07.797313 | orchestrator | ok 2025-04-13 00:03:07.809481 | 2025-04-13 00:03:07.809584 | TASK [Set manager_host address] 2025-04-13 00:03:07.914158 | orchestrator | ok 2025-04-13 00:03:07.925459 | 2025-04-13 00:03:07.925557 | LOOP [Update ansible collections] 2025-04-13 00:03:08.767690 | orchestrator | changed 2025-04-13 00:03:09.578750 | orchestrator | changed 2025-04-13 00:03:09.598568 | 2025-04-13 00:03:09.598673 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-13 00:03:20.112984 | orchestrator | ok 2025-04-13 00:03:20.126508 | 2025-04-13 00:03:20.126633 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-13 00:04:20.174968 | orchestrator | ok 2025-04-13 00:04:20.188537 | 2025-04-13 00:04:20.188737 | TASK [Fetch manager ssh hostkey] 2025-04-13 00:04:21.243241 | orchestrator | Output suppressed because no_log was given 2025-04-13 00:04:21.255035 | 2025-04-13 00:04:21.255162 | TASK [Get ssh keypair from terraform environment] 2025-04-13 00:04:21.835571 | orchestrator | changed 2025-04-13 00:04:21.856343 | 2025-04-13 00:04:21.856502 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-13 00:04:21.906434 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-04-13 00:04:21.916739 | 2025-04-13 00:04:21.916848 | TASK [Run manager part 0] 2025-04-13 00:04:22.789266 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-13 00:04:22.832854 | orchestrator | 2025-04-13 00:04:24.744345 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-04-13 00:04:24.744423 | orchestrator | 2025-04-13 00:04:24.744447 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-04-13 00:04:24.744468 | orchestrator | ok: [testbed-manager] 2025-04-13 00:04:26.675205 | orchestrator | 2025-04-13 00:04:26.675426 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-13 00:04:26.675456 | orchestrator | 2025-04-13 00:04:26.675471 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-13 00:04:26.675496 | orchestrator | ok: [testbed-manager] 2025-04-13 00:04:27.376461 | orchestrator | 2025-04-13 00:04:27.376536 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-13 00:04:27.376555 | orchestrator | ok: [testbed-manager] 2025-04-13 00:04:27.430826 | orchestrator | 2025-04-13 00:04:27.430881 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-13 00:04:27.430899 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:04:27.472909 | orchestrator | 2025-04-13 00:04:27.472973 | orchestrator | TASK [Update package cache] **************************************************** 2025-04-13 00:04:27.472990 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:04:27.507415 | orchestrator | 2025-04-13 00:04:27.507484 | orchestrator | TASK [Install required packages] *********************************************** 2025-04-13 00:04:27.507501 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:04:27.546242 | 
orchestrator | 2025-04-13 00:04:27.546328 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-04-13 00:04:27.546354 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:04:27.584641 | orchestrator | 2025-04-13 00:04:27.584736 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-13 00:04:27.584764 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:04:27.628472 | orchestrator | 2025-04-13 00:04:27.628534 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-04-13 00:04:27.628550 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:04:27.664082 | orchestrator | 2025-04-13 00:04:27.664146 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-04-13 00:04:27.664164 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:04:28.520885 | orchestrator | 2025-04-13 00:04:28.520964 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-04-13 00:04:28.520982 | orchestrator | changed: [testbed-manager] 2025-04-13 00:07:23.911803 | orchestrator | 2025-04-13 00:07:23.911929 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-04-13 00:07:23.911969 | orchestrator | changed: [testbed-manager] 2025-04-13 00:08:40.683584 | orchestrator | 2025-04-13 00:08:40.683709 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-04-13 00:08:40.683745 | orchestrator | changed: [testbed-manager] 2025-04-13 00:09:06.753288 | orchestrator | 2025-04-13 00:09:06.753467 | orchestrator | TASK [Install required packages] *********************************************** 2025-04-13 00:09:06.753537 | orchestrator | changed: [testbed-manager] 2025-04-13 00:09:16.795137 | orchestrator | 2025-04-13 00:09:16.795252 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-04-13 00:09:16.795293 | orchestrator | changed: [testbed-manager] 2025-04-13 00:09:16.843071 | orchestrator | 2025-04-13 00:09:16.843152 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-13 00:09:16.843182 | orchestrator | ok: [testbed-manager] 2025-04-13 00:09:17.637968 | orchestrator | 2025-04-13 00:09:17.638100 | orchestrator | TASK [Get current user] ******************************************************** 2025-04-13 00:09:17.638135 | orchestrator | ok: [testbed-manager] 2025-04-13 00:09:18.410693 | orchestrator | 2025-04-13 00:09:18.410800 | orchestrator | TASK [Create venv directory] *************************************************** 2025-04-13 00:09:18.410844 | orchestrator | changed: [testbed-manager] 2025-04-13 00:09:24.973295 | orchestrator | 2025-04-13 00:09:24.973474 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-04-13 00:09:24.973518 | orchestrator | changed: [testbed-manager] 2025-04-13 00:09:31.242876 | orchestrator | 2025-04-13 00:09:31.242959 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-04-13 00:09:31.242993 | orchestrator | changed: [testbed-manager] 2025-04-13 00:09:33.912005 | orchestrator | 2025-04-13 00:09:33.912119 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-04-13 00:09:33.912156 | orchestrator | changed: [testbed-manager] 2025-04-13 00:09:35.705344 | orchestrator | 2025-04-13 00:09:35.705483 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-04-13 00:09:35.705524 | orchestrator | changed: [testbed-manager] 2025-04-13 00:09:36.816301 | orchestrator | 2025-04-13 00:09:36.816351 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-04-13 
00:09:36.816369 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-04-13 00:09:36.862917 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-04-13 00:09:36.863014 | orchestrator | 2025-04-13 00:09:36.863037 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-04-13 00:09:36.863071 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-13 00:09:40.027564 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-13 00:09:40.027677 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-13 00:09:40.027696 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-04-13 00:09:40.027727 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-04-13 00:09:40.599097 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-04-13 00:09:40.599201 | orchestrator | 2025-04-13 00:09:40.599221 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-04-13 00:09:40.599250 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:01.998888 | orchestrator | 2025-04-13 00:10:01.998993 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-04-13 00:10:01.999024 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-04-13 00:10:04.393985 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-04-13 00:10:04.394133 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-04-13 00:10:04.394153 | orchestrator | 2025-04-13 00:10:04.394171 | orchestrator | TASK [Install local collections] *********************************************** 2025-04-13 00:10:04.394201 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-04-13 00:10:05.879111 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-04-13 00:10:05.879236 | orchestrator | 2025-04-13 00:10:05.879256 | orchestrator | PLAY [Create operator user] **************************************************** 2025-04-13 00:10:05.879272 | orchestrator | 2025-04-13 00:10:05.879287 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-13 00:10:05.879319 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:05.922098 | orchestrator | 2025-04-13 00:10:05.922187 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-04-13 00:10:05.922210 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:05.982908 | orchestrator | 2025-04-13 00:10:05.982983 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-04-13 00:10:05.983000 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:06.794901 | orchestrator | 2025-04-13 00:10:06.794980 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-04-13 00:10:06.795012 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:07.508475 | orchestrator | 2025-04-13 00:10:07.508570 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-04-13 00:10:07.508597 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:08.884730 | orchestrator | 2025-04-13 00:10:08.884838 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-04-13 00:10:08.884875 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-04-13 00:10:10.290272 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-04-13 00:10:10.290386 | orchestrator | 2025-04-13 00:10:10.290406 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-04-13 00:10:10.290463 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:12.136486 | orchestrator | 2025-04-13 00:10:12.136591 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-13 00:10:12.136625 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-04-13 00:10:12.745653 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-04-13 00:10:12.745783 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-04-13 00:10:12.745818 | orchestrator | 2025-04-13 00:10:12.745847 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-13 00:10:12.745889 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:12.827675 | orchestrator | 2025-04-13 00:10:12.827791 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-13 00:10:12.827826 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:10:13.717672 | orchestrator | 2025-04-13 00:10:13.717762 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-04-13 00:10:13.717789 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-13 00:10:13.756093 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:13.756201 | orchestrator | 2025-04-13 00:10:13.756221 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-13 00:10:13.756254 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:10:13.785144 | orchestrator | 2025-04-13 00:10:13.785255 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-13 00:10:13.785293 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:10:13.825030 | orchestrator | 2025-04-13 00:10:13.825131 | orchestrator | TASK [osism.commons.operator : Delete 
authorized GitHub accounts] ************** 2025-04-13 00:10:13.825163 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:10:13.879784 | orchestrator | 2025-04-13 00:10:13.879882 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-13 00:10:13.879915 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:10:14.721187 | orchestrator | 2025-04-13 00:10:14.721235 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-13 00:10:14.721251 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:16.232138 | orchestrator | 2025-04-13 00:10:16.232267 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-13 00:10:16.232289 | orchestrator | 2025-04-13 00:10:16.232305 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-13 00:10:16.232336 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:17.231228 | orchestrator | 2025-04-13 00:10:17.231292 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-04-13 00:10:17.231315 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:17.345206 | orchestrator | 2025-04-13 00:10:17.345472 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:10:17.345489 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-04-13 00:10:17.345495 | orchestrator | 2025-04-13 00:10:17.685983 | orchestrator | changed 2025-04-13 00:10:17.702274 | 2025-04-13 00:10:17.702395 | TASK [Point out that the log in on the manager is now possible] 2025-04-13 00:10:17.754020 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
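The PLAY RECAP above closes the bootstrap play with failed=0 and unreachable=0. As an aside, such recap lines are easy to check mechanically; a small illustrative shell sketch (the recap string is copied from the log above, the parsing itself is an assumption and not part of the testbed tooling):

```shell
# Illustrative only: pull the failure counters out of an Ansible PLAY RECAP
# line. The recap text is taken verbatim from the log above.
recap='testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0'
failed=$(printf '%s' "$recap" | grep -o 'failed=[0-9]*' | cut -d= -f2)
unreachable=$(printf '%s' "$recap" | grep -o 'unreachable=[0-9]*' | cut -d= -f2)
echo "failed=$failed unreachable=$unreachable"
```

A CI wrapper could exit non-zero when either counter is non-zero, instead of relying on ansible-playbook's own exit code.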
2025-04-13 00:10:17.764847 | 2025-04-13 00:10:17.764955 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-13 00:10:17.814263 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-04-13 00:10:17.825274 | 2025-04-13 00:10:17.825394 | TASK [Run manager part 1 + 2] 2025-04-13 00:10:18.670751 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-13 00:10:18.731554 | orchestrator | 2025-04-13 00:10:21.290521 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-04-13 00:10:21.290735 | orchestrator | 2025-04-13 00:10:21.290795 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-13 00:10:21.290838 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:21.331741 | orchestrator | 2025-04-13 00:10:21.331863 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-13 00:10:21.331908 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:10:21.371601 | orchestrator | 2025-04-13 00:10:21.371700 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-13 00:10:21.371737 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:21.408760 | orchestrator | 2025-04-13 00:10:21.408881 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-13 00:10:21.408920 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:21.479620 | orchestrator | 2025-04-13 00:10:21.479708 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-13 00:10:21.479740 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:21.537649 | orchestrator | 2025-04-13 00:10:21.537742 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-13 00:10:21.537774 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:21.587541 | orchestrator | 2025-04-13 00:10:21.587641 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-13 00:10:21.587674 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-04-13 00:10:22.329996 | orchestrator | 2025-04-13 00:10:22.330136 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-13 00:10:22.330174 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:22.379087 | orchestrator | 2025-04-13 00:10:22.379214 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-13 00:10:22.379267 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:10:23.794398 | orchestrator | 2025-04-13 00:10:23.794561 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-13 00:10:23.794629 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:24.369351 | orchestrator | 2025-04-13 00:10:24.369493 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-04-13 00:10:24.369531 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:25.518953 | orchestrator | 2025-04-13 00:10:25.519061 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-13 00:10:25.519099 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:38.294344 | orchestrator | 2025-04-13 00:10:38.294488 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-13 00:10:38.294525 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:38.972071 | orchestrator | 
2025-04-13 00:10:38.972175 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-13 00:10:38.972209 | orchestrator | ok: [testbed-manager] 2025-04-13 00:10:39.027233 | orchestrator | 2025-04-13 00:10:39.027330 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-13 00:10:39.027361 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:10:40.035969 | orchestrator | 2025-04-13 00:10:40.036039 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-04-13 00:10:40.036058 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:41.012848 | orchestrator | 2025-04-13 00:10:41.012908 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-04-13 00:10:41.012928 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:41.601843 | orchestrator | 2025-04-13 00:10:41.601894 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-04-13 00:10:41.601911 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:41.665977 | orchestrator | 2025-04-13 00:10:41.666133 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-04-13 00:10:41.666174 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-13 00:10:44.013420 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-13 00:10:44.013504 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-13 00:10:44.013516 | orchestrator | deprecation_warnings=False in ansible.cfg. 
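The deprecation warning above states its own remedy: set deprecation_warnings=False in ansible.cfg. A sketch of doing exactly that (the file is written in the current directory here; the config path Ansible actually consults is installation-dependent and an assumption):

```shell
# Sketch: silence Ansible deprecation warnings, as the warning text suggests.
# Appending to ./ansible.cfg here; the effective config location may differ.
cat >> ansible.cfg <<'EOF'
[defaults]
deprecation_warnings = False
EOF
```

This only hides the warning; the deprecated stdin usage itself would still need fixing before Ansible 2.19 removes it.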
2025-04-13 00:10:44.013533 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:53.207709 | orchestrator | 2025-04-13 00:10:53.207765 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-04-13 00:10:53.207782 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-04-13 00:10:54.279528 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-04-13 00:10:54.279651 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-04-13 00:10:54.279672 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-04-13 00:10:54.279689 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-04-13 00:10:54.279703 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-04-13 00:10:54.279717 | orchestrator | 2025-04-13 00:10:54.279732 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-04-13 00:10:54.279785 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:54.323404 | orchestrator | 2025-04-13 00:10:54.323536 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-04-13 00:10:54.323575 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:10:57.494715 | orchestrator | 2025-04-13 00:10:57.494838 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-04-13 00:10:57.494877 | orchestrator | changed: [testbed-manager] 2025-04-13 00:10:57.536850 | orchestrator | 2025-04-13 00:10:57.536936 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-04-13 00:10:57.536969 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:12:33.821530 | orchestrator | 2025-04-13 00:12:33.821649 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-04-13 00:12:33.821685 | orchestrator | changed: [testbed-manager] 2025-04-13 
00:12:34.927827 | orchestrator | 2025-04-13 00:12:34.927938 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-13 00:12:34.927976 | orchestrator | ok: [testbed-manager] 2025-04-13 00:12:35.033889 | orchestrator | 2025-04-13 00:12:35.034008 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:12:35.034244 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-04-13 00:12:35.034282 | orchestrator | 2025-04-13 00:12:35.457937 | orchestrator | changed 2025-04-13 00:12:35.477989 | 2025-04-13 00:12:35.478132 | TASK [Reboot manager] 2025-04-13 00:12:37.021691 | orchestrator | changed 2025-04-13 00:12:37.039301 | 2025-04-13 00:12:37.039493 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-13 00:12:53.469260 | orchestrator | ok 2025-04-13 00:12:53.481308 | 2025-04-13 00:12:53.481429 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-13 00:13:53.529320 | orchestrator | ok 2025-04-13 00:13:53.541552 | 2025-04-13 00:13:53.541678 | TASK [Deploy manager + bootstrap nodes] 2025-04-13 00:13:56.094942 | orchestrator | 2025-04-13 00:13:56.098633 | orchestrator | # DEPLOY MANAGER 2025-04-13 00:13:56.098784 | orchestrator | 2025-04-13 00:13:56.098811 | orchestrator | + set -e 2025-04-13 00:13:56.098858 | orchestrator | + echo 2025-04-13 00:13:56.098877 | orchestrator | + echo '# DEPLOY MANAGER' 2025-04-13 00:13:56.098894 | orchestrator | + echo 2025-04-13 00:13:56.098919 | orchestrator | + cat /opt/manager-vars.sh 2025-04-13 00:13:56.098962 | orchestrator | export NUMBER_OF_NODES=6 2025-04-13 00:13:56.099948 | orchestrator | 2025-04-13 00:13:56.099976 | orchestrator | export CEPH_VERSION=quincy 2025-04-13 00:13:56.099993 | orchestrator | export CONFIGURATION_VERSION=main 2025-04-13 00:13:56.100009 | orchestrator | export MANAGER_VERSION=8.1.0 
2025-04-13 00:13:56.100024 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-04-13 00:13:56.100056 | orchestrator | 2025-04-13 00:13:56.100072 | orchestrator | export ARA=false 2025-04-13 00:13:56.100087 | orchestrator | export TEMPEST=false 2025-04-13 00:13:56.100102 | orchestrator | export IS_ZUUL=true 2025-04-13 00:13:56.100116 | orchestrator | 2025-04-13 00:13:56.100130 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.13 2025-04-13 00:13:56.100145 | orchestrator | export EXTERNAL_API=false 2025-04-13 00:13:56.100159 | orchestrator | 2025-04-13 00:13:56.100172 | orchestrator | export IMAGE_USER=ubuntu 2025-04-13 00:13:56.100186 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-04-13 00:13:56.100202 | orchestrator | 2025-04-13 00:13:56.100215 | orchestrator | export CEPH_STACK=ceph-ansible 2025-04-13 00:13:56.100229 | orchestrator | 2025-04-13 00:13:56.100243 | orchestrator | + echo 2025-04-13 00:13:56.100257 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-13 00:13:56.100278 | orchestrator | ++ export INTERACTIVE=false 2025-04-13 00:13:56.100400 | orchestrator | ++ INTERACTIVE=false 2025-04-13 00:13:56.100418 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-13 00:13:56.100440 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-13 00:13:56.100459 | orchestrator | + source /opt/manager-vars.sh 2025-04-13 00:13:56.100550 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-13 00:13:56.100567 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-13 00:13:56.100624 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-13 00:13:56.100641 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-13 00:13:56.100678 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-13 00:13:56.100694 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-13 00:13:56.100728 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-13 00:13:56.100743 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-13 00:13:56.100757 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.1 2025-04-13 00:13:56.100798 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-13 00:13:56.100817 | orchestrator | ++ export ARA=false 2025-04-13 00:13:56.100855 | orchestrator | ++ ARA=false 2025-04-13 00:13:56.100888 | orchestrator | ++ export TEMPEST=false 2025-04-13 00:13:56.100904 | orchestrator | ++ TEMPEST=false 2025-04-13 00:13:56.100928 | orchestrator | ++ export IS_ZUUL=true 2025-04-13 00:13:56.100943 | orchestrator | ++ IS_ZUUL=true 2025-04-13 00:13:56.100978 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.13 2025-04-13 00:13:56.100993 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.13 2025-04-13 00:13:56.101018 | orchestrator | ++ export EXTERNAL_API=false 2025-04-13 00:13:56.163902 | orchestrator | ++ EXTERNAL_API=false 2025-04-13 00:13:56.164026 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-13 00:13:56.164064 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-13 00:13:56.164080 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-13 00:13:56.164095 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-13 00:13:56.164120 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-13 00:13:56.164136 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-13 00:13:56.164151 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-04-13 00:13:56.164195 | orchestrator | + docker version 2025-04-13 00:13:56.455828 | orchestrator | Client: Docker Engine - Community 2025-04-13 00:13:56.459270 | orchestrator | Version: 26.1.4 2025-04-13 00:13:56.459369 | orchestrator | API version: 1.45 2025-04-13 00:13:56.459385 | orchestrator | Go version: go1.21.11 2025-04-13 00:13:56.459397 | orchestrator | Git commit: 5650f9b 2025-04-13 00:13:56.459408 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-04-13 00:13:56.459420 | orchestrator | OS/Arch: linux/amd64 2025-04-13 00:13:56.459431 | orchestrator | Context: default 2025-04-13 00:13:56.459442 | orchestrator | 2025-04-13 
00:13:56.459454 | orchestrator | Server: Docker Engine - Community 2025-04-13 00:13:56.459465 | orchestrator | Engine: 2025-04-13 00:13:56.459477 | orchestrator | Version: 26.1.4 2025-04-13 00:13:56.459488 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-04-13 00:13:56.459499 | orchestrator | Go version: go1.21.11 2025-04-13 00:13:56.459512 | orchestrator | Git commit: de5c9cf 2025-04-13 00:13:56.459553 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-04-13 00:13:56.459565 | orchestrator | OS/Arch: linux/amd64 2025-04-13 00:13:56.459571 | orchestrator | Experimental: false 2025-04-13 00:13:56.459578 | orchestrator | containerd: 2025-04-13 00:13:56.459584 | orchestrator | Version: 1.7.27 2025-04-13 00:13:56.459590 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-04-13 00:13:56.459597 | orchestrator | runc: 2025-04-13 00:13:56.459604 | orchestrator | Version: 1.2.5 2025-04-13 00:13:56.459610 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-04-13 00:13:56.459616 | orchestrator | docker-init: 2025-04-13 00:13:56.459622 | orchestrator | Version: 0.19.0 2025-04-13 00:13:56.459628 | orchestrator | GitCommit: de40ad0 2025-04-13 00:13:56.459645 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-04-13 00:13:56.470295 | orchestrator | + set -e 2025-04-13 00:13:56.470338 | orchestrator | + source /opt/manager-vars.sh 2025-04-13 00:13:56.470387 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-13 00:13:56.470394 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-13 00:13:56.470402 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-13 00:13:56.470411 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-13 00:13:56.470421 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-13 00:13:56.470432 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-13 00:13:56.470449 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-13 00:13:56.470459 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-13 
00:13:56.470470 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-13 00:13:56.470487 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-13 00:13:56.470497 | orchestrator | ++ export ARA=false 2025-04-13 00:13:56.470503 | orchestrator | ++ ARA=false 2025-04-13 00:13:56.470509 | orchestrator | ++ export TEMPEST=false 2025-04-13 00:13:56.470515 | orchestrator | ++ TEMPEST=false 2025-04-13 00:13:56.470529 | orchestrator | ++ export IS_ZUUL=true 2025-04-13 00:13:56.470581 | orchestrator | ++ IS_ZUUL=true 2025-04-13 00:13:56.470590 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.13 2025-04-13 00:13:56.470596 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.13 2025-04-13 00:13:56.470602 | orchestrator | ++ export EXTERNAL_API=false 2025-04-13 00:13:56.470619 | orchestrator | ++ EXTERNAL_API=false 2025-04-13 00:13:56.470626 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-13 00:13:56.470632 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-13 00:13:56.470641 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-13 00:13:56.470650 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-13 00:13:56.470656 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-13 00:13:56.470662 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-13 00:13:56.470668 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-13 00:13:56.470673 | orchestrator | ++ export INTERACTIVE=false 2025-04-13 00:13:56.470679 | orchestrator | ++ INTERACTIVE=false 2025-04-13 00:13:56.470687 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-13 00:13:56.470909 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-13 00:13:56.470921 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-13 00:13:56.478405 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-04-13 00:13:56.478453 | orchestrator | + set -e 2025-04-13 00:13:56.485726 | orchestrator | + VERSION=8.1.0 2025-04-13 00:13:56.485772 | orchestrator | + sed -i 
's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-04-13 00:13:56.485799 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-13 00:13:56.489618 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-04-13 00:13:56.489651 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-04-13 00:13:56.493459 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-04-13 00:13:56.501895 | orchestrator | /opt/configuration ~ 2025-04-13 00:13:56.505104 | orchestrator | + set -e 2025-04-13 00:13:56.505167 | orchestrator | + pushd /opt/configuration 2025-04-13 00:13:56.505186 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-13 00:13:56.505214 | orchestrator | + source /opt/venv/bin/activate 2025-04-13 00:13:56.506150 | orchestrator | ++ deactivate nondestructive 2025-04-13 00:13:56.506245 | orchestrator | ++ '[' -n '' ']' 2025-04-13 00:13:56.506282 | orchestrator | ++ '[' -n '' ']' 2025-04-13 00:13:56.506298 | orchestrator | ++ hash -r 2025-04-13 00:13:56.506319 | orchestrator | ++ '[' -n '' ']' 2025-04-13 00:13:56.506334 | orchestrator | ++ unset VIRTUAL_ENV 2025-04-13 00:13:56.506349 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-04-13 00:13:56.506363 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-04-13 00:13:56.506430 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-04-13 00:13:56.506446 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-04-13 00:13:56.506460 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-04-13 00:13:56.506599 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-04-13 00:13:56.506619 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-13 00:13:56.506634 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-13 00:13:56.506648 | orchestrator | ++ export PATH 2025-04-13 00:13:56.506668 | orchestrator | ++ '[' -n '' ']' 2025-04-13 00:13:57.789655 | orchestrator | ++ '[' -z '' ']' 2025-04-13 00:13:57.789774 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-04-13 00:13:57.789791 | orchestrator | ++ PS1='(venv) ' 2025-04-13 00:13:57.789802 | orchestrator | ++ export PS1 2025-04-13 00:13:57.789813 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-04-13 00:13:57.789824 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-04-13 00:13:57.789835 | orchestrator | ++ hash -r 2025-04-13 00:13:57.789846 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-04-13 00:13:57.789875 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-04-13 00:13:57.790177 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-04-13 00:13:57.791591 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-04-13 00:13:57.792799 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-04-13 00:13:57.794128 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (24.2)
2025-04-13 00:13:57.804205 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8)
2025-04-13 00:13:57.805521 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-04-13 00:13:57.806370 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-04-13 00:13:57.807843 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-04-13 00:13:57.840777 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.1)
2025-04-13 00:13:57.842266 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-04-13 00:13:57.843712 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-04-13 00:13:57.845246 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.1.31)
2025-04-13 00:13:57.849347 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-04-13 00:13:58.060190 | orchestrator | ++ which gilt
2025-04-13 00:13:58.064301 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-04-13 00:13:58.304627 | orchestrator | + /opt/venv/bin/gilt overlay
2025-04-13 00:13:58.304801 | orchestrator | osism.cfg-generics:
2025-04-13 00:13:59.907227 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics
2025-04-13 00:13:59.907388 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-04-13 00:13:59.907869 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-04-13 00:13:59.907903 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-04-13 00:13:59.907922 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-04-13 00:14:00.907967 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-04-13 00:14:00.919497 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-04-13 00:14:01.236486 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-04-13 00:14:01.291738 | orchestrator | ~
2025-04-13 00:14:01.292935 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-04-13 00:14:01.292970 | orchestrator | + deactivate
2025-04-13 00:14:01.293006 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-04-13 00:14:01.293024 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-13 00:14:01.293062 | orchestrator | + export PATH
2025-04-13 00:14:01.293076 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-04-13 00:14:01.293090 | orchestrator | + '[' -n '' ']'
2025-04-13 00:14:01.293104 | orchestrator | + hash -r
2025-04-13 00:14:01.293118 | orchestrator | + '[' -n '' ']'
2025-04-13 00:14:01.293132 | orchestrator | + unset VIRTUAL_ENV
2025-04-13 00:14:01.293146 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-04-13 00:14:01.293161 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-04-13 00:14:01.293177 | orchestrator | + unset -f deactivate
2025-04-13 00:14:01.293191 | orchestrator | + popd
2025-04-13 00:14:01.293213 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-04-13 00:14:01.293967 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-04-13 00:14:01.293997 | orchestrator | ++ semver 8.1.0 7.0.0
2025-04-13 00:14:01.347025 | orchestrator | + [[ 1 -ge 0 ]]
2025-04-13 00:14:01.378359 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-04-13 00:14:01.378545 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-04-13 00:14:01.378597 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-04-13 00:14:01.378626 | orchestrator | + source /opt/venv/bin/activate
2025-04-13 00:14:01.378652 | orchestrator | ++ deactivate nondestructive
2025-04-13 00:14:01.378669 | orchestrator | ++ '[' -n '' ']'
2025-04-13 00:14:01.378684 | orchestrator | ++ '[' -n '' ']'
2025-04-13 00:14:01.378698 | orchestrator | ++ hash -r
2025-04-13 00:14:01.378746 | orchestrator | ++ '[' -n '' ']'
2025-04-13 00:14:01.378875 | orchestrator | ++ unset VIRTUAL_ENV
2025-04-13 00:14:01.378929 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-04-13 00:14:01.378956 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-04-13 00:14:01.378997 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-04-13 00:14:01.379135 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-04-13 00:14:01.379157 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-04-13 00:14:01.379172 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-04-13 00:14:01.379186 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-13 00:14:01.379201 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-13 00:14:01.379215 | orchestrator | ++ export PATH
2025-04-13 00:14:01.379229 | orchestrator | ++ '[' -n '' ']'
2025-04-13 00:14:01.379243 | orchestrator | ++ '[' -z '' ']'
2025-04-13 00:14:01.379257 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-04-13 00:14:01.379272 | orchestrator | ++ PS1='(venv) '
2025-04-13 00:14:01.379314 | orchestrator | ++ export PS1
2025-04-13 00:14:01.379416 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-04-13 00:14:01.379432 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-04-13 00:14:01.379450 | orchestrator | ++ hash -r
2025-04-13 00:14:01.379468 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-04-13 00:14:02.570391 | orchestrator |
2025-04-13 00:14:03.159267 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-04-13 00:14:03.159394 | orchestrator |
2025-04-13 00:14:03.159417 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-04-13 00:14:03.159465 | orchestrator | ok: [testbed-manager]
2025-04-13 00:14:04.203064 | orchestrator |
2025-04-13 00:14:04.203211 | orchestrator | TASK [Copy fact files] *********************************************************
2025-04-13 00:14:04.203250 | orchestrator | changed: [testbed-manager]
2025-04-13 00:14:06.679490 | orchestrator |
2025-04-13 00:14:06.679659 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-04-13 00:14:06.679693 | orchestrator |
2025-04-13 00:14:06.679719 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-13 00:14:06.679758 | orchestrator | ok: [testbed-manager]
2025-04-13 00:14:11.774729 | orchestrator |
2025-04-13 00:14:11.774871 | orchestrator | TASK [Pull images] *************************************************************
2025-04-13 00:14:11.774939 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2)
2025-04-13 00:15:27.643276 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2)
2025-04-13 00:15:27.643418 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0)
2025-04-13 00:15:27.643440 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0)
2025-04-13 00:15:27.643455 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0)
2025-04-13 00:15:27.643470 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine)
2025-04-13 00:15:27.643485 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7)
2025-04-13 00:15:27.643499 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0)
2025-04-13 00:15:27.643513 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2)
2025-04-13 00:15:27.643535 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine)
2025-04-13 00:15:27.643550 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1)
2025-04-13 00:15:27.643564 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2)
2025-04-13 00:15:27.643578 | orchestrator |
2025-04-13 00:15:27.643592 | orchestrator | TASK [Check status] ************************************************************
2025-04-13 00:15:27.643625 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-04-13 00:15:27.696850 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-04-13 00:15:27.696996 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left).
2025-04-13 00:15:27.697015 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left).
2025-04-13 00:15:27.697031 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j132007095501.1586', 'results_file': '/home/dragon/.ansible_async/j132007095501.1586', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'})
2025-04-13 00:15:27.697065 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j975461292691.1611', 'results_file': '/home/dragon/.ansible_async/j975461292691.1611', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'})
2025-04-13 00:15:27.697080 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-04-13 00:15:27.697094 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-04-13 00:15:27.697108 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j34591424090.1636', 'results_file': '/home/dragon/.ansible_async/j34591424090.1636', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-04-13 00:15:27.697130 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j260935905032.1668', 'results_file': '/home/dragon/.ansible_async/j260935905032.1668', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'})
2025-04-13 00:15:27.697148 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-04-13 00:15:27.697163 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j68891138510.1700', 'results_file': '/home/dragon/.ansible_async/j68891138510.1700', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-04-13 00:15:27.697177 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j998300399521.1732', 'results_file': '/home/dragon/.ansible_async/j998300399521.1732', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'})
2025-04-13 00:15:27.697191 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j851540907622.1764', 'results_file': '/home/dragon/.ansible_async/j851540907622.1764', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'})
2025-04-13 00:15:27.697265 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j398195309464.1804', 'results_file': '/home/dragon/.ansible_async/j398195309464.1804', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-04-13 00:15:27.697281 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j610431221870.1831', 'results_file': '/home/dragon/.ansible_async/j610431221870.1831', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'})
2025-04-13 00:15:27.697295 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j536712478088.1864', 'results_file': '/home/dragon/.ansible_async/j536712478088.1864', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'})
2025-04-13 00:15:27.697310 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j677789824576.1898', 'results_file': '/home/dragon/.ansible_async/j677789824576.1898', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'})
2025-04-13 00:15:27.697324 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j125747870922.1931', 'results_file': '/home/dragon/.ansible_async/j125747870922.1931', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'})
2025-04-13 00:15:27.697338 | orchestrator |
2025-04-13 00:15:27.697353 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-04-13 00:15:27.697383 | orchestrator | ok: [testbed-manager]
2025-04-13 00:15:28.173826 | orchestrator |
2025-04-13 00:15:28.173950 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-04-13 00:15:28.173971 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:28.523195 | orchestrator |
2025-04-13 00:15:28.523317 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] *******************************
2025-04-13 00:15:28.523353 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:28.875879 | orchestrator |
2025-04-13 00:15:28.876068 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-04-13 00:15:28.876106 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:28.933288 | orchestrator |
2025-04-13 00:15:28.933442 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-04-13 00:15:28.933479 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:15:29.273157 | orchestrator |
2025-04-13 00:15:29.273280 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-04-13 00:15:29.273318 | orchestrator | ok: [testbed-manager]
2025-04-13 00:15:29.446959 | orchestrator |
2025-04-13 00:15:29.447148 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-04-13 00:15:29.447194 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:15:31.287711 | orchestrator |
2025-04-13 00:15:31.287942 | orchestrator | PLAY [Apply role traefik & netbox] *********************************************
2025-04-13 00:15:31.287981 | orchestrator |
2025-04-13 00:15:31.288000 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-13 00:15:31.288039 | orchestrator | ok: [testbed-manager]
2025-04-13 00:15:31.505952 | orchestrator |
2025-04-13 00:15:31.506242 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-04-13 00:15:31.506281 | orchestrator |
2025-04-13 00:15:31.605370 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-04-13 00:15:31.605532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-04-13 00:15:32.795967 | orchestrator |
2025-04-13 00:15:32.796149 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-04-13 00:15:32.796229 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-04-13 00:15:34.698298 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-04-13 00:15:34.698460 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-04-13 00:15:34.698482 | orchestrator |
2025-04-13 00:15:34.698497 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-04-13 00:15:34.698535 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-04-13 00:15:35.385163 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-04-13 00:15:35.385317 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-04-13 00:15:35.385337 | orchestrator |
2025-04-13 00:15:35.385353 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-04-13 00:15:35.385390 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-13 00:15:36.083706 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:36.083835 | orchestrator |
2025-04-13 00:15:36.083848 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-04-13 00:15:36.083931 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-13 00:15:36.165414 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:36.165550 | orchestrator |
2025-04-13 00:15:36.165569 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-04-13 00:15:36.165603 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:15:36.544730 | orchestrator |
2025-04-13 00:15:36.544966 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-04-13 00:15:36.545011 | orchestrator | ok: [testbed-manager]
2025-04-13 00:15:36.663680 | orchestrator |
2025-04-13 00:15:36.663789 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-04-13 00:15:36.663812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-04-13 00:15:37.758107 | orchestrator |
2025-04-13 00:15:37.758268 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-04-13 00:15:37.758307 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:38.615034 | orchestrator |
2025-04-13 00:15:38.615213 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-04-13 00:15:38.615255 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:42.020923 | orchestrator |
2025-04-13 00:15:42.021052 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-04-13 00:15:42.021091 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:42.389441 | orchestrator |
2025-04-13 00:15:42.389586 | orchestrator | TASK [Apply netbox role] *******************************************************
2025-04-13 00:15:42.389622 | orchestrator |
2025-04-13 00:15:42.497208 | orchestrator | TASK [osism.services.netbox : Include install tasks] ***************************
2025-04-13 00:15:42.497386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager
2025-04-13 00:15:45.147473 | orchestrator |
2025-04-13 00:15:45.147615 | orchestrator | TASK [osism.services.netbox : Install required packages] ***********************
2025-04-13 00:15:45.147655 | orchestrator | ok: [testbed-manager]
2025-04-13 00:15:45.282177 | orchestrator |
2025-04-13 00:15:45.282330 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-04-13 00:15:45.282373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager
2025-04-13 00:15:46.457106 | orchestrator |
2025-04-13 00:15:46.457235 | orchestrator | TASK [osism.services.netbox : Create required directories] *********************
2025-04-13 00:15:46.457268 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox)
2025-04-13 00:15:46.554484 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration)
2025-04-13 00:15:46.554599 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets)
2025-04-13 00:15:46.554617 | orchestrator |
2025-04-13 00:15:46.554632 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] *******************
2025-04-13 00:15:46.554663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager
2025-04-13 00:15:47.220007 | orchestrator |
2025-04-13 00:15:47.220134 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] *****************
2025-04-13 00:15:47.220171 | orchestrator | changed: [testbed-manager] => (item=postgres)
2025-04-13 00:15:47.881939 | orchestrator |
2025-04-13 00:15:47.882118 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] ****************
2025-04-13 00:15:47.882170 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:48.575386 | orchestrator |
2025-04-13 00:15:48.575518 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-04-13 00:15:48.575556 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-13 00:15:48.977787 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:48.977958 | orchestrator |
2025-04-13 00:15:48.977980 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] *****
2025-04-13 00:15:48.978129 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:49.339936 | orchestrator |
2025-04-13 00:15:49.340063 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] *******************
2025-04-13 00:15:49.340100 | orchestrator | ok: [testbed-manager]
2025-04-13 00:15:49.404154 | orchestrator |
2025-04-13 00:15:49.404270 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ******************************
2025-04-13 00:15:49.404301 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:15:50.049134 | orchestrator |
2025-04-13 00:15:50.049285 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] ***********
2025-04-13 00:15:50.049324 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:50.169049 | orchestrator |
2025-04-13 00:15:50.169192 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-04-13 00:15:50.169230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager
2025-04-13 00:15:50.962203 | orchestrator |
2025-04-13 00:15:50.962336 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] ***********
2025-04-13 00:15:50.962375 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers)
2025-04-13 00:15:51.657928 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts)
2025-04-13 00:15:51.658086 | orchestrator |
2025-04-13 00:15:51.658105 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] *******************
2025-04-13 00:15:51.658135 | orchestrator | changed: [testbed-manager] => (item=netbox)
2025-04-13 00:15:52.349051 | orchestrator |
2025-04-13 00:15:52.349181 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ******************
2025-04-13 00:15:52.349219 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:52.403950 | orchestrator |
2025-04-13 00:15:52.404055 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] ****
2025-04-13 00:15:52.404081 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:15:53.034450 | orchestrator |
2025-04-13 00:15:53.034574 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] *****
2025-04-13 00:15:53.034610 | orchestrator | changed: [testbed-manager]
2025-04-13 00:15:54.861181 | orchestrator |
2025-04-13 00:15:54.861318 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-04-13 00:15:54.861358 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-13 00:16:00.878511 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-13 00:16:00.878650 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-13 00:16:00.878671 | orchestrator | changed: [testbed-manager]
2025-04-13 00:16:00.878688 | orchestrator |
2025-04-13 00:16:00.878704 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ******************
2025-04-13 00:16:00.878736 | orchestrator | changed: [testbed-manager] => (item=custom_fields)
2025-04-13 00:16:01.610818 | orchestrator | changed: [testbed-manager] => (item=device_roles)
2025-04-13 00:16:01.611021 | orchestrator | changed: [testbed-manager] => (item=device_types)
2025-04-13 00:16:01.611044 | orchestrator | changed: [testbed-manager] => (item=groups)
2025-04-13 00:16:01.611060 | orchestrator | changed: [testbed-manager] => (item=manufacturers)
2025-04-13 00:16:01.611076 | orchestrator | changed: [testbed-manager] => (item=object_permissions)
2025-04-13 00:16:01.611090 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles)
2025-04-13 00:16:01.611104 | orchestrator | changed: [testbed-manager] => (item=sites)
2025-04-13 00:16:01.611145 | orchestrator | changed: [testbed-manager] => (item=tags)
2025-04-13 00:16:01.611159 | orchestrator | changed: [testbed-manager] => (item=users)
2025-04-13 00:16:01.611174 | orchestrator |
2025-04-13 00:16:01.611190 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] ***************
2025-04-13 00:16:01.611222 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py)
2025-04-13 00:16:01.805440 | orchestrator |
2025-04-13 00:16:01.805583 | orchestrator | TASK [osism.services.netbox : Include service tasks] ***************************
2025-04-13 00:16:01.805632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager
2025-04-13 00:16:02.520247 | orchestrator |
2025-04-13 00:16:02.520371 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] *******************
2025-04-13 00:16:02.520407 | orchestrator | changed: [testbed-manager]
2025-04-13 00:16:03.188435 | orchestrator |
2025-04-13 00:16:03.188579 | orchestrator | TASK [osism.services.netbox : Create traefik external network] *****************
2025-04-13 00:16:03.188632 | orchestrator | ok: [testbed-manager]
2025-04-13 00:16:03.981173 | orchestrator |
2025-04-13 00:16:03.981308 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ********************
2025-04-13 00:16:03.981346 | orchestrator | changed: [testbed-manager]
2025-04-13 00:16:09.666702 | orchestrator |
2025-04-13 00:16:09.666901 | orchestrator | TASK [osism.services.netbox : Pull container images] ***************************
2025-04-13 00:16:09.666962 | orchestrator | changed: [testbed-manager]
2025-04-13 00:16:10.627972 | orchestrator |
2025-04-13 00:16:10.628097 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] ***
2025-04-13 00:16:10.628135 | orchestrator | ok: [testbed-manager]
2025-04-13 00:16:32.832606 | orchestrator |
2025-04-13 00:16:32.832770 | orchestrator | TASK [osism.services.netbox : Manage netbox service] ***************************
2025-04-13 00:16:32.832846 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left).
2025-04-13 00:16:32.914944 | orchestrator | ok: [testbed-manager]
2025-04-13 00:16:32.915063 | orchestrator |
2025-04-13 00:16:32.915082 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ********
2025-04-13 00:16:32.915113 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:16:32.983160 | orchestrator |
2025-04-13 00:16:32.983274 | orchestrator | TASK [osism.services.netbox : Flush handlers] **********************************
2025-04-13 00:16:32.983292 | orchestrator |
2025-04-13 00:16:32.983308 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-04-13 00:16:32.983338 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:16:33.083387 | orchestrator |
2025-04-13 00:16:33.083508 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-04-13 00:16:33.083543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager
2025-04-13 00:16:33.920054 | orchestrator |
2025-04-13 00:16:33.920173 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ******
2025-04-13 00:16:33.920209 | orchestrator | ok: [testbed-manager]
2025-04-13 00:16:34.015779 | orchestrator |
2025-04-13 00:16:34.015939 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] ***
2025-04-13 00:16:34.015986 | orchestrator | ok: [testbed-manager]
2025-04-13 00:16:34.090110 | orchestrator |
2025-04-13 00:16:34.090264 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] ***
2025-04-13 00:16:34.090318 | orchestrator | ok: [testbed-manager] => {
2025-04-13 00:16:34.748864 | orchestrator |     "msg": "The major version of the running postgres container is 16"
2025-04-13 00:16:34.748985 | orchestrator | }
2025-04-13 00:16:34.749002 | orchestrator |
2025-04-13 00:16:34.749016 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ******************
2025-04-13 00:16:34.749044 | orchestrator | ok: [testbed-manager]
2025-04-13 00:16:35.669901 | orchestrator |
2025-04-13 00:16:35.670077 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] **********
2025-04-13 00:16:35.670121 | orchestrator | ok: [testbed-manager]
2025-04-13 00:16:35.762461 | orchestrator |
2025-04-13 00:16:35.762601 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ******
2025-04-13 00:16:35.762691 | orchestrator | ok: [testbed-manager]
2025-04-13 00:16:35.830075 | orchestrator |
2025-04-13 00:16:35.830177 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] ***
2025-04-13 00:16:35.830216 | orchestrator | ok: [testbed-manager] => {
2025-04-13 00:16:35.897578 | orchestrator |     "msg": "The major version of the postgres image is 16"
2025-04-13 00:16:35.897721 | orchestrator | }
2025-04-13 00:16:35.897753 | orchestrator |
2025-04-13 00:16:35.897777 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ******************
2025-04-13 00:16:35.897844 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:16:35.976683 | orchestrator |
2025-04-13 00:16:35.976907 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ******
2025-04-13 00:16:35.976953 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:16:36.047066 | orchestrator |
2025-04-13 00:16:36.047187 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] *********
2025-04-13 00:16:36.047238 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:16:36.135929 | orchestrator |
2025-04-13 00:16:36.136048 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************
2025-04-13 00:16:36.136084 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:16:36.217061 | orchestrator |
2025-04-13 00:16:36.217173 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] ***
2025-04-13 00:16:36.217207 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:16:36.294118 | orchestrator |
2025-04-13 00:16:36.294226 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] *****************
2025-04-13 00:16:36.294260 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:16:37.531307 | orchestrator |
2025-04-13 00:16:37.531449 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-04-13 00:16:37.531490 | orchestrator | changed: [testbed-manager]
2025-04-13 00:16:37.653332 | orchestrator |
2025-04-13 00:16:37.653477 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] ***
2025-04-13 00:16:37.653529 | orchestrator | ok: [testbed-manager]
2025-04-13 00:17:37.737912 | orchestrator |
2025-04-13 00:17:37.738108 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] *****
2025-04-13 00:17:37.738152 | orchestrator | Pausing for 60 seconds
2025-04-13 00:17:37.835231 | orchestrator | changed: [testbed-manager]
2025-04-13 00:17:37.835348 | orchestrator |
2025-04-13 00:17:37.835379 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] ***
2025-04-13 00:17:37.835427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager
2025-04-13 00:21:50.114269 | orchestrator |
2025-04-13 00:21:50.114413 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] ***
2025-04-13 00:21:50.114454 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left).
2025-04-13 00:21:53.263288 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-04-13 00:21:53.263418 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-04-13 00:21:53.263438 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-04-13 00:21:53.263453 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-04-13 00:21:53.263468 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-04-13 00:21:53.263535 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-04-13 00:21:53.263554 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left).
2025-04-13 00:21:53.263569 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left).
2025-04-13 00:21:53.263583 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left).
2025-04-13 00:21:53.263597 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left).
2025-04-13 00:21:53.263642 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left).
2025-04-13 00:21:53.263657 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left).
2025-04-13 00:21:53.263671 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left).
2025-04-13 00:21:53.263685 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left).
2025-04-13 00:21:53.263699 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left).
2025-04-13 00:21:53.263713 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left).
2025-04-13 00:21:53.263727 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left).
2025-04-13 00:21:53.263741 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left).
2025-04-13 00:21:53.263768 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left).
2025-04-13 00:21:53.263782 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left).
2025-04-13 00:21:53.263796 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left).
2025-04-13 00:21:53.263810 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left).
2025-04-13 00:21:53.263827 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left).
2025-04-13 00:21:53.263842 | orchestrator | changed: [testbed-manager]
2025-04-13 00:21:53.263859 | orchestrator |
2025-04-13 00:21:53.263876 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-04-13 00:21:53.263892 | orchestrator |
2025-04-13 00:21:53.263907 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-13 00:21:53.263940 | orchestrator | ok: [testbed-manager]
2025-04-13 00:21:53.385321 | orchestrator |
2025-04-13 00:21:53.385438 | orchestrator | TASK [Apply manager role] ******************************************************
2025-04-13 00:21:53.385474 | orchestrator |
2025-04-13 00:21:53.451304 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-04-13 00:21:53.451425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-04-13 00:21:55.301057 | orchestrator |
2025-04-13 00:21:55.301176 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-04-13 00:21:55.301206 | orchestrator | ok: [testbed-manager]
2025-04-13 00:21:55.362209 | orchestrator |
2025-04-13 00:21:55.362294 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-04-13 00:21:55.362313 | orchestrator | ok: [testbed-manager]
2025-04-13 00:21:55.460972 | orchestrator |
2025-04-13 00:21:55.461058 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-04-13 00:21:55.461080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-04-13 00:21:58.415628 | orchestrator |
2025-04-13 00:21:58.415727 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-04-13 00:21:58.415755 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-04-13 00:21:59.119772 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-04-13 00:21:59.119917 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-04-13 00:21:59.119949 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-04-13 00:21:59.119974 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-04-13 00:21:59.119998 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-04-13 00:21:59.120025 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-04-13 00:21:59.120048 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-04-13 00:21:59.120111 | orchestrator |
2025-04-13 00:21:59.120130 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-04-13 00:21:59.120162 | orchestrator | changed: [testbed-manager]
2025-04-13 00:21:59.217872 | orchestrator |
2025-04-13 00:21:59.217973 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-04-13 00:21:59.218006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-04-13 00:22:00.435565 | orchestrator |
2025-04-13 00:22:00.435655 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-04-13 00:22:00.435680 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-04-13 00:22:01.075162 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-04-13 00:22:01.075287 | orchestrator |
2025-04-13 00:22:01.075308 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-04-13 00:22:01.075341 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:01.145686 | orchestrator |
2025-04-13 00:22:01.145840 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-04-13 00:22:01.145892 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:22:01.211644 | orchestrator |
2025-04-13 00:22:01.211786 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-04-13 00:22:01.211825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-04-13 00:22:02.622895 | orchestrator |
2025-04-13 00:22:02.623054 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-04-13 00:22:02.623095 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-13 00:22:03.283354 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-13 00:22:03.283507 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:03.283532 | orchestrator |
2025-04-13 00:22:03.283549 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-04-13 00:22:03.283581 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:03.378160 | orchestrator |
2025-04-13 00:22:03.378268 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-04-13 00:22:03.378303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager
2025-04-13 00:22:04.007927 | orchestrator |
2025-04-13 00:22:04.008053 | orchestrator | TASK [osism.services.manager : Copy secret files] ******************************
2025-04-13 00:22:04.008090 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-13 00:22:04.642320 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:04.642445 | orchestrator |
2025-04-13 00:22:04.642465 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] *******************
2025-04-13 00:22:04.642557 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:04.758686 | orchestrator |
2025-04-13 00:22:04.758763 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-04-13 00:22:04.758793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-04-13 00:22:05.273943 | orchestrator |
2025-04-13 00:22:05.274097 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-04-13 00:22:05.274138 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:05.686263 | orchestrator |
2025-04-13 00:22:05.686436 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-04-13 00:22:05.686547 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:06.922000 | orchestrator |
2025-04-13 00:22:06.922182 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-04-13 00:22:06.922219 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-04-13 00:22:07.591930 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-04-13 00:22:07.592056 | orchestrator |
2025-04-13 00:22:07.592078 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-04-13 00:22:07.592110 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:08.008547 | orchestrator |
2025-04-13 00:22:08.008671 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-04-13 00:22:08.008740 | orchestrator | ok: [testbed-manager]
2025-04-13 00:22:08.360545 | orchestrator |
2025-04-13 00:22:08.360671 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-04-13 00:22:08.360710 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:08.412295 | orchestrator |
2025-04-13 00:22:08.412406 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-04-13 00:22:08.412441 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:22:08.532267 | orchestrator |
2025-04-13 00:22:08.532382 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-04-13 00:22:08.532416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-04-13 00:22:08.581291 | orchestrator |
2025-04-13 00:22:08.581431 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-04-13 00:22:08.581467 | orchestrator | ok: [testbed-manager]
2025-04-13 00:22:10.677028 | orchestrator |
2025-04-13 00:22:10.677152 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-04-13 00:22:10.677190 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-04-13 00:22:11.427430 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-04-13 00:22:11.427613 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-04-13 00:22:11.427635 | orchestrator |
2025-04-13 00:22:11.427651 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-04-13 00:22:11.427684 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:12.200133 | orchestrator |
2025-04-13 00:22:12.200244 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-04-13 00:22:12.200271 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:12.279292 | orchestrator |
2025-04-13 00:22:12.279423 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-04-13 00:22:12.279464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-04-13 00:22:12.334325 | orchestrator |
2025-04-13 00:22:12.334440 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-04-13 00:22:12.334516 | orchestrator | ok: [testbed-manager]
2025-04-13 00:22:13.284248 | orchestrator |
2025-04-13 00:22:13.284368 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-04-13 00:22:13.284406 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-04-13 00:22:13.378597 | orchestrator |
2025-04-13 00:22:13.378719 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-04-13 00:22:13.378760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-04-13 00:22:14.090638 | orchestrator |
2025-04-13 00:22:14.090758 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-04-13 00:22:14.090794 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:14.752659 | orchestrator |
2025-04-13 00:22:14.752784 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-04-13 00:22:14.752821 | orchestrator | ok: [testbed-manager]
2025-04-13 00:22:14.814960 | orchestrator |
2025-04-13 00:22:14.815045 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-04-13 00:22:14.815075 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:22:14.871456 | orchestrator |
2025-04-13 00:22:14.871607 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-04-13 00:22:14.871642 | orchestrator | ok: [testbed-manager]
2025-04-13 00:22:15.708741 | orchestrator |
2025-04-13 00:22:15.708866 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-04-13 00:22:15.708905 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:56.376591 | orchestrator |
2025-04-13 00:22:56.376733 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-04-13 00:22:56.376775 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:57.066819 | orchestrator |
2025-04-13 00:22:57.066972 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-04-13 00:22:57.067018 | orchestrator | ok: [testbed-manager]
2025-04-13 00:22:59.734994 | orchestrator |
2025-04-13 00:22:59.735125 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-04-13 00:22:59.735166 | orchestrator | changed: [testbed-manager]
2025-04-13 00:22:59.785945 | orchestrator |
2025-04-13 00:22:59.786124 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-04-13 00:22:59.786162 | orchestrator | ok: [testbed-manager]
2025-04-13 00:22:59.854428 | orchestrator |
2025-04-13 00:22:59.854585 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-04-13 00:22:59.854603 | orchestrator |
2025-04-13 00:22:59.854617 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-04-13 00:22:59.854648 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:23:59.916339 | orchestrator |
2025-04-13 00:23:59.916453 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-04-13 00:23:59.916481 | orchestrator | Pausing for 60 seconds
2025-04-13 00:24:05.420582 | orchestrator | changed: [testbed-manager]
2025-04-13 00:24:05.420738 | orchestrator |
2025-04-13 00:24:05.420771 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-04-13 00:24:05.420806 | orchestrator | changed: [testbed-manager]
2025-04-13 00:24:47.081219 | orchestrator |
2025-04-13 00:24:47.081360 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-04-13 00:24:47.081402 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-04-13 00:24:53.182109 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-04-13 00:24:53.182253 | orchestrator | changed: [testbed-manager]
2025-04-13 00:24:53.182275 | orchestrator |
2025-04-13 00:24:53.182289 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-04-13 00:24:53.182329 | orchestrator | changed: [testbed-manager]
2025-04-13 00:24:53.315389 | orchestrator |
2025-04-13 00:24:53.315519 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-04-13 00:24:53.315616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-04-13 00:24:53.383783 | orchestrator |
2025-04-13 00:24:53.383895 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-04-13 00:24:53.383913 | orchestrator |
2025-04-13 00:24:53.383928 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-04-13 00:24:53.383958 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:24:53.521012 | orchestrator |
2025-04-13 00:24:53.521126 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:24:53.521145 | orchestrator | testbed-manager : ok=105 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025-04-13 00:24:53.521161 | orchestrator |
2025-04-13 00:24:53.521192 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-04-13 00:24:53.521478 | orchestrator | + deactivate
2025-04-13 00:24:53.521504 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-04-13 00:24:53.521519 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-13 00:24:53.521563 | orchestrator | + export PATH
2025-04-13 00:24:53.521578 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-04-13 00:24:53.521592 | orchestrator | + '[' -n '' ']'
2025-04-13 00:24:53.521607 | orchestrator | + hash -r
2025-04-13 00:24:53.521621 | orchestrator | + '[' -n '' ']'
2025-04-13 00:24:53.521634 | orchestrator | + unset VIRTUAL_ENV
2025-04-13 00:24:53.521648 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-04-13 00:24:53.521669 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-04-13 00:24:53.529755 | orchestrator | + unset -f deactivate
2025-04-13 00:24:53.529790 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-04-13 00:24:53.529812 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-04-13 00:24:53.530077 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-04-13 00:24:53.530193 | orchestrator | + local max_attempts=60
2025-04-13 00:24:53.530213 | orchestrator | + local name=ceph-ansible
2025-04-13 00:24:53.530246 | orchestrator | + local attempt_num=1
2025-04-13 00:24:53.531453 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-04-13 00:24:53.566701 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-13 00:24:53.566970 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-04-13 00:24:53.567002 | orchestrator | + local max_attempts=60
2025-04-13 00:24:53.567018 | orchestrator | + local name=kolla-ansible
2025-04-13 00:24:53.567032 | orchestrator | + local attempt_num=1
2025-04-13 00:24:53.567052 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-04-13 00:24:53.593151 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-13 00:24:53.593876 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-04-13 00:24:53.593912 | orchestrator | + local max_attempts=60
2025-04-13 00:24:53.593931 | orchestrator | + local name=osism-ansible
2025-04-13 00:24:53.593949 | orchestrator | + local attempt_num=1
2025-04-13 00:24:53.593974 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-04-13 00:24:53.631337 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-13 00:24:54.354304 | orchestrator | + [[ true == \t\r\u\e ]]
2025-04-13 00:24:54.354425 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-04-13 00:24:54.354466 | orchestrator | ++ semver 8.1.0 9.0.0
2025-04-13 00:24:54.406772 | orchestrator | + [[ -1 -ge 0 ]]
2025-04-13 00:24:54.642467 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-04-13 00:24:54.642593 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-04-13 00:24:54.642620 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-04-13 00:24:54.651923 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-04-13 00:24:54.651995 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-04-13 00:24:54.652004 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-04-13 00:24:54.652031 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-04-13 00:24:54.652041 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy)
2025-04-13 00:24:54.652053 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy)
2025-04-13 00:24:54.652062 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy)
2025-04-13 00:24:54.652070 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 49 seconds (healthy)
2025-04-13 00:24:54.652079 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy)
2025-04-13 00:24:54.652087 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-04-13 00:24:54.652095 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy)
2025-04-13 00:24:54.652103 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy)
2025-04-13 00:24:54.652137 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-04-13 00:24:54.652145 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy)
2025-04-13 00:24:54.652153 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-04-13 00:24:54.652162 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-04-13 00:24:54.652170 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy)
2025-04-13 00:24:54.652189 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-04-13 00:24:54.810749 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-04-13 00:24:54.816157 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy)
2025-04-13 00:24:54.816195 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy)
2025-04-13 00:24:54.816206 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 8 minutes (healthy) 5432/tcp
2025-04-13 00:24:54.816216 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 8 minutes (healthy) 6379/tcp
2025-04-13 00:24:54.816230 | orchestrator | ++ semver 8.1.0 7.0.0
2025-04-13 00:24:54.870278 | orchestrator | + [[ 1 -ge 0 ]]
2025-04-13 00:24:54.874772 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-04-13 00:24:54.874815 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-04-13 00:24:56.480066 | orchestrator | 2025-04-13 00:24:56 | INFO  | Task b7926d2e-fd5a-49af-844a-0f9289ac1113 (resolvconf) was prepared for execution.
2025-04-13 00:24:59.541878 | orchestrator | 2025-04-13 00:24:56 | INFO  | It takes a moment until task b7926d2e-fd5a-49af-844a-0f9289ac1113 (resolvconf) has been started and output is visible here.
2025-04-13 00:24:59.542007 | orchestrator |
2025-04-13 00:24:59.542166 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-04-13 00:24:59.542185 | orchestrator |
2025-04-13 00:24:59.542627 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-13 00:24:59.544136 | orchestrator | Sunday 13 April 2025 00:24:59 +0000 (0:00:00.088) 0:00:00.088 **********
2025-04-13 00:25:03.685766 | orchestrator | ok: [testbed-manager]
2025-04-13 00:25:03.686697 | orchestrator |
2025-04-13 00:25:03.686887 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-04-13 00:25:03.761111 | orchestrator | Sunday 13 April 2025 00:25:03 +0000 (0:00:04.147) 0:00:04.236 **********
2025-04-13 00:25:03.761232 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:25:03.856300 | orchestrator |
2025-04-13 00:25:03.856510 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-04-13 00:25:03.856576 | orchestrator | Sunday 13 April 2025 00:25:03 +0000 (0:00:00.075) 0:00:04.312 **********
2025-04-13 00:25:03.856609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-04-13 00:25:03.856913 | orchestrator |
2025-04-13 00:25:03.856976 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-04-13 00:25:03.857199 | orchestrator | Sunday 13 April 2025 00:25:03 +0000 (0:00:00.093) 0:00:04.405 **********
2025-04-13 00:25:03.935372 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-04-13 00:25:03.936066 | orchestrator |
2025-04-13 00:25:03.936115 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-04-13 00:25:03.936464 | orchestrator | Sunday 13 April 2025 00:25:03 +0000 (0:00:00.080) 0:00:04.486 **********
2025-04-13 00:25:04.897958 | orchestrator | ok: [testbed-manager]
2025-04-13 00:25:04.899243 | orchestrator |
2025-04-13 00:25:04.899905 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-04-13 00:25:04.900148 | orchestrator | Sunday 13 April 2025 00:25:04 +0000 (0:00:00.961) 0:00:05.447 **********
2025-04-13 00:25:04.936084 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:25:04.936265 | orchestrator |
2025-04-13 00:25:04.937176 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-04-13 00:25:04.938583 | orchestrator | Sunday 13 April 2025 00:25:04 +0000 (0:00:00.041) 0:00:05.488 **********
2025-04-13 00:25:05.367340 | orchestrator | ok: [testbed-manager]
2025-04-13 00:25:05.367983 | orchestrator |
2025-04-13 00:25:05.368424 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-04-13 00:25:05.369480 | orchestrator | Sunday 13 April 2025 00:25:05 +0000 (0:00:00.056) 0:00:05.919 **********
2025-04-13 00:25:05.426525 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:25:05.952690 | orchestrator |
2025-04-13 00:25:05.952786 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-04-13 00:25:05.952797 | orchestrator | Sunday 13 April 2025 00:25:05 +0000 (0:00:00.527) 0:00:05.975 **********
2025-04-13 00:25:05.952817 | orchestrator | changed: [testbed-manager]
2025-04-13 00:25:05.952958 | orchestrator |
2025-04-13 00:25:05.953607 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-04-13 00:25:05.954206 | orchestrator | Sunday 13 April 2025 00:25:05 +0000 (0:00:00.527) 0:00:06.503 **********
2025-04-13 00:25:06.977807 | orchestrator | changed: [testbed-manager]
2025-04-13 00:25:06.978750 | orchestrator |
2025-04-13 00:25:06.978827 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-04-13 00:25:06.979917 | orchestrator | Sunday 13 April 2025 00:25:06 +0000 (0:00:01.025) 0:00:07.528 **********
2025-04-13 00:25:07.873954 | orchestrator | ok: [testbed-manager]
2025-04-13 00:25:07.874210 | orchestrator |
2025-04-13 00:25:07.874843 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-04-13 00:25:07.875282 | orchestrator | Sunday 13 April 2025 00:25:07 +0000 (0:00:00.896) 0:00:08.425 **********
2025-04-13 00:25:07.955801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-04-13 00:25:07.956306 | orchestrator |
2025-04-13 00:25:07.956993 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-04-13 00:25:07.959755 | orchestrator | Sunday 13 April 2025 00:25:07 +0000 (0:00:00.082) 0:00:08.508 **********
2025-04-13 00:25:09.132902 | orchestrator | changed: [testbed-manager]
2025-04-13 00:25:09.133733 | orchestrator |
2025-04-13 00:25:09.133784 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:25:09.134198 | orchestrator | 2025-04-13 00:25:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:25:09.135234 | orchestrator | 2025-04-13 00:25:09 | INFO  | Please wait and do not abort execution.
2025-04-13 00:25:09.135270 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-13 00:25:09.136379 | orchestrator |
2025-04-13 00:25:09.137498 | orchestrator | Sunday 13 April 2025 00:25:09 +0000 (0:00:01.175) 0:00:09.683 **********
2025-04-13 00:25:09.137870 | orchestrator | ===============================================================================
2025-04-13 00:25:09.138655 | orchestrator | Gathering Facts --------------------------------------------------------- 4.15s
2025-04-13 00:25:09.139176 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.18s
2025-04-13 00:25:09.139713 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.03s
2025-04-13 00:25:09.141168 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.96s
2025-04-13 00:25:09.141878 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.90s
2025-04-13 00:25:09.142656 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s
2025-04-13 00:25:09.143575 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.43s
2025-04-13 00:25:09.144200 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-04-13 00:25:09.145043 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-04-13 00:25:09.145874 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2025-04-13 00:25:09.147365 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s
2025-04-13 00:25:09.149516 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.06s
2025-04-13 00:25:09.150218 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.04s
2025-04-13 00:25:09.561773 | orchestrator | + osism apply sshconfig
2025-04-13 00:25:11.025230 | orchestrator | 2025-04-13 00:25:11 | INFO  | Task 93d1f5d1-b8d3-481e-9504-cb8e7b5e1dcb (sshconfig) was prepared for execution.
2025-04-13 00:25:14.128963 | orchestrator | 2025-04-13 00:25:11 | INFO  | It takes a moment until task 93d1f5d1-b8d3-481e-9504-cb8e7b5e1dcb (sshconfig) has been started and output is visible here.
2025-04-13 00:25:14.129103 | orchestrator |
2025-04-13 00:25:14.130226 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-04-13 00:25:14.130446 | orchestrator |
2025-04-13 00:25:14.723564 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-04-13 00:25:14.723808 | orchestrator | Sunday 13 April 2025 00:25:14 +0000 (0:00:00.114) 0:00:00.114 **********
2025-04-13 00:25:14.723847 | orchestrator | ok: [testbed-manager]
2025-04-13 00:25:14.725660 | orchestrator |
2025-04-13 00:25:14.725703 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-04-13 00:25:15.216805 | orchestrator | Sunday 13 April 2025 00:25:14 +0000 (0:00:00.596) 0:00:00.710 **********
2025-04-13 00:25:15.216929 | orchestrator | changed: [testbed-manager]
2025-04-13 00:25:15.217809 | orchestrator |
2025-04-13 00:25:15.217917 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-04-13 00:25:15.218592 | orchestrator | Sunday 13 April 2025 00:25:15 +0000 (0:00:00.492) 0:00:01.203 **********
2025-04-13 00:25:21.002536 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-04-13 00:25:21.003247 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-04-13 00:25:21.003620 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-04-13 00:25:21.004499 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-04-13 00:25:21.006660 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-04-13 00:25:21.009752 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-04-13 00:25:21.010435 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-04-13 00:25:21.011158 | orchestrator |
2025-04-13 00:25:21.012241 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-04-13 00:25:21.012814 | orchestrator | Sunday 13 April 2025 00:25:20 +0000 (0:00:05.784) 0:00:06.988 **********
2025-04-13 00:25:21.084811 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:25:21.086270 | orchestrator |
2025-04-13 00:25:21.086314 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-04-13 00:25:21.650889 | orchestrator | Sunday 13 April 2025 00:25:21 +0000 (0:00:00.083) 0:00:07.072 **********
2025-04-13 00:25:21.651027 | orchestrator | changed: [testbed-manager]
2025-04-13 00:25:21.652015 | orchestrator |
2025-04-13 00:25:21.652055 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:25:21.652149 | orchestrator | 2025-04-13 00:25:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:25:21.652168 | orchestrator | 2025-04-13 00:25:21 | INFO  | Please wait and do not abort execution.
2025-04-13 00:25:21.652188 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-13 00:25:21.652799 | orchestrator | 2025-04-13 00:25:21.653112 | orchestrator | Sunday 13 April 2025 00:25:21 +0000 (0:00:00.566) 0:00:07.638 ********** 2025-04-13 00:25:21.653472 | orchestrator | =============================================================================== 2025-04-13 00:25:21.653815 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.78s 2025-04-13 00:25:21.654162 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.60s 2025-04-13 00:25:21.654493 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2025-04-13 00:25:21.654965 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-04-13 00:25:21.655239 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-04-13 00:25:22.060998 | orchestrator | + osism apply known-hosts 2025-04-13 00:25:23.512815 | orchestrator | 2025-04-13 00:25:23 | INFO  | Task d7e12165-8cec-4a1e-8a76-8c54f5a6ad29 (known-hosts) was prepared for execution. 2025-04-13 00:25:26.539251 | orchestrator | 2025-04-13 00:25:23 | INFO  | It takes a moment until task d7e12165-8cec-4a1e-8a76-8c54f5a6ad29 (known-hosts) has been started and output is visible here. 
2025-04-13 00:25:26.539398 | orchestrator | 2025-04-13 00:25:26.539973 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-04-13 00:25:26.540676 | orchestrator | 2025-04-13 00:25:26.541770 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-04-13 00:25:26.542258 | orchestrator | Sunday 13 April 2025 00:25:26 +0000 (0:00:00.107) 0:00:00.107 ********** 2025-04-13 00:25:32.559677 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-04-13 00:25:32.560197 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-04-13 00:25:32.560883 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-04-13 00:25:32.561825 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-04-13 00:25:32.562644 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-04-13 00:25:32.563745 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-04-13 00:25:32.564210 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-04-13 00:25:32.567376 | orchestrator | 2025-04-13 00:25:32.568106 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-04-13 00:25:32.568755 | orchestrator | Sunday 13 April 2025 00:25:32 +0000 (0:00:06.022) 0:00:06.130 ********** 2025-04-13 00:25:32.737326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-04-13 00:25:32.740173 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-04-13 00:25:32.740596 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-04-13 00:25:32.740636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-04-13 00:25:32.740867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-04-13 00:25:32.741544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-04-13 00:25:32.742090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-04-13 00:25:32.742126 | orchestrator | 2025-04-13 00:25:32.742442 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:32.742816 | orchestrator | Sunday 13 April 2025 00:25:32 +0000 (0:00:00.177) 0:00:06.307 ********** 2025-04-13 00:25:33.990638 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCQUURBaqIPTrsriKcyb28QPNDMZ13467XgGIVdCIuoKGLZipG+XXqq+dSDIqtR/SrK2as+aXvzCUzMhNEmX9WPJTw2EDZoWglLalfS6ghOoEsDWO8DXjqxdz8b0MwM8iQMhK1uv/4Gg3PBh6XPuIVAbkVF8EA7T4N6zNZ5a7/8JF4S6GIJc24hWKwGXm/mcz7oY4sDQfjOxe6LNjJrUifszk96IHtLpHQt3T9jNkaOxVDH0CQHDiQ9eg6tt6peuIlb64LSS+TrRi8Bl6Yt7Kpl0cVz4HQTJJ/gZW8aQe1vnAKzZuRJEwnepuQCqgoNRsAt1LhSuQdwvjvMV2xpmyRtY96bbVsPaqOIIjD2qp2gUSm5bxnsgD/Yf8x3iusHat9CG8hZf0Rj9D2QYwTv6sW6/0ebrUTwIq3OKUhlesWJDY33gKGMDQ45Esn+xbdH3SjrP0QiXJzMaNHbsX84cjGVwisY1Sk7BGcUxkVUb1YvkpOfZIUCPP3grtDicSYy5DE=) 2025-04-13 00:25:33.991400 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAJS9tIL2veHo5rITPamzL9G/saC3G4oZfOrgozKEHqL1WwjyGUk1SVK5JcYmDrhqik/09xSRQ+9/pZiohi5mlU=) 2025-04-13 00:25:33.991450 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII+uLnqd5bhmTe9HIIyaf7DKev3c6XVtjfwZK8iyg9e1) 2025-04-13 00:25:33.991707 | orchestrator | 2025-04-13 00:25:33.991740 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:33.992073 | orchestrator | Sunday 13 April 2025 00:25:33 +0000 (0:00:01.253) 0:00:07.560 ********** 2025-04-13 00:25:35.047221 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQ+qXKPyvAugeeceLOVbazoLst2ajw5HT4qCzkwrwC08BU9w+fXO9JMw3Dt2WCLnj6pgXkqIUKm3+YA8QoHcV2Aw7DpkOsKeRCE/anNMzuhMkjKq0CaBSmkOYgNLOsYzUx/JUsiAOcMIn6JQ3z19HWhzlEIj8r3Yl+EDFQypUN5kOciLlbmoPHQJ61kBk/URqgBLCug4F378w3EFWF8i01/MFhRGlb0U7iMYUb4fiWlJBelYxjAvfHyAVBfGYqdgJ8aS64fjjCHFLxiVHJ0aLQGHpiot1yhlGr4S/mvR52JynfjeG2jPh0ymO6BIcaMD2AaVjSoTtBfIx3jCqG4mAsOsBm1vdxlwWi2G4Dhus5oCVeL+MUovPrGfNb3573MV9URKTJRQ6DHAml1V+Wlu77QvRy7lmXu/2IHHDXx9DuLyr2+YZeoIwQCAff7ga/fAWXZyDgEDxdO2XIXNuWUnO49GlV2048DCf6AGCTZW3BQomVMYDuX1Xloq/G6PN3a4M=) 2025-04-13 00:25:35.047427 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEoVqIRZHmqZqCpUpM7c3WLvsx6oaWJUd9vrpJz7sZ6Ws2Tw+iBIy8zteoxDmaAXWk6jSkiIV43zPt/fwxFvxt8=) 2025-04-13 00:25:35.048443 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHSg2stmmbW4cM0cSuHtXoEN3v8gqC35W+EGvEjKclyL) 2025-04-13 00:25:35.049153 | orchestrator | 2025-04-13 00:25:35.050290 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:35.050328 | orchestrator | Sunday 13 April 2025 00:25:35 +0000 (0:00:01.055) 0:00:08.616 ********** 2025-04-13 00:25:36.132641 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID0b1+Y3QnImwZ9tYKBiXTNWCuYeXLwlJeC//13vrb9k) 2025-04-13 00:25:36.133319 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvPlXRDe7tR5V3YT9mso0ElYbnZVdw5u5sxH9vgOgIziTNzVUs5QB/VBwCPRRlGm8Z7tkc6wuxv178RqlGJH9eUeFZetVmNwmUBv4SKwJYSYgnR4bjMq8bzO70RMrEDgFMAlufaSbT8yymjbiQQ9cEu4yNuwwRdYvxsdQRK7B9OVFNJvnBe0uZrNg6SynNtgrGoOAv9b58ElA6CVWaV6aZitImroWzrSUpeLelpMqTjeT3iGxoSCO1C0LvBZJq2SU0X3OUImN5pOMGX3jACdfunduwvTT/FccjezG0Ize27snK8QM1N1murNIiBkOrW8Nh4a40HNW9RRQLUzH7KD2El1XBAnIZByIErqxRFZXsSU9tiQNJoqAGsajzIbXCwakDJxJbeCiCpCGmNP+1zYx+Ru8VjkmXKvlS65eXoIhHVNVQmHolTsHabbpnXqZT7ZyavtEQPJN30h5DRnsg3UKFzTFWKfta3qLCsgIVkxJxy613UlXfFV3Dr63IWeVY2Fk=) 2025-04-13 00:25:36.134129 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOHhIrqDeRLgj+zfyPXDyFUChkpZo7wq11hE/eVhjAtdIFSq6Q/Y/7XhsDLOgTLB00YWJ2mHSPd4RyY8Vxob4tg=) 2025-04-13 00:25:36.134870 | orchestrator | 2025-04-13 00:25:36.135445 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:36.136309 | orchestrator | Sunday 13 April 2025 00:25:36 +0000 (0:00:01.086) 
0:00:09.703 ********** 2025-04-13 00:25:37.215854 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC28zvMl3A0p4cUDnwBGpYrUSLuGDxkogNZ8jpRRa3iT6s1uE8LVe3O7pZxg56CsmAr52FGEhBIjQjALBYdZaLGoLfZ9SF8klcdWj7Pp5mVn9jo/P6i/MNRJMlf5dlSWiDlTJtRWcUefMJT7wTHRstX8kKwZ+6Re6nmZLfEMvni5pSjXAcu9k/42/j56T3rbmTBXIY4krjDaxuAVcDhV7NIPXiS9CNZ1+u6PA47aQl6VEbldbLWLlM6saM12DKndQ1JLyzHt4LVAHF+oeSpFpEqu97PM+CGL6UXlu/j9LONOOI9VOi5RJdtXjQIxVdMgc2lhMND8KDd+xAC7cI47TtFqFfmLrvq8/1RWqGDgU94A5xfW8vBJV6l4CoN7bmbhnPNSmZo7+g5A807LB62KK7lbYf49Jxxju6rfI3wafRxJbbg7xqfQ9Tjar52mlADEWCnLJdusOcgTon/ItMQTHYzURXqxlr63AAiPr2onyOLaZhbtSKI9G+hY+KvRQ6LfFM=) 2025-04-13 00:25:37.216810 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeASxxrURZDLR0Oa+ic9n2ExTljIOXPEqpP3Ca3KgIwNEZKvbvp5Y6pw2lxz92kq1duxUm9ib2lFGaIrAsBops=) 2025-04-13 00:25:37.217107 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKhYUoQppWlu0DHqWE3cJynn9xtw28pn/S3xZN01nnTc) 2025-04-13 00:25:37.217915 | orchestrator | 2025-04-13 00:25:37.217949 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:37.218405 | orchestrator | Sunday 13 April 2025 00:25:37 +0000 (0:00:01.081) 0:00:10.784 ********** 2025-04-13 00:25:38.345093 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGZHwidUAOFliSf6huhRKznGoa/ksv8Po1qJpEUYXiRyrZxj9nlCZdx8CUvoKsF9p5+ms3wyfNwkBhgaVJUom0I=) 2025-04-13 00:25:38.345482 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHmA8jW5Oie7LnlNH6dwW+8jHzqNRkcjDPhj/0SSJieF) 2025-04-13 00:25:38.346479 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCrrPM+6dfKzIaOwhNaIfc7oVVaPuGW2X+ZDHsFISrsB57XkCx7BJ/10jZqcfmisFzk1vx8Q0H56Qernn2ib2sLqZ9bWZbmWM6kWyN1IWcTmoALqd5mvjph9rTIGT9pjNEau7wPGc92cvm5ivn64CUiBFVavwQgU0Y615lvXoxuZe5h5gRurnuTsZonGhmh47I9VSkI8LdWH5K6dlDrLaRjMYXugqwrSeIHS6V+qtZTkzqwPal4eR/3QkRUPNhHBmnyj5Qx/BoitIGNDktg0kk72fUt7h8C1RykQxeMHA2fxIWc+Aquczb1DN3XMT+/kpPcmBu81upUmio3c3Z67ah+c0hiXQwAHSwsvkCm+j1OKY4u/kisaHJWtc9MXFOc8YFnGb37vRxsYFJjGUWHmhfKa6SGOE49mjQv8b2Y15nqGpVyTlKalQwryqOhK9mxQF0b/YGyq6V11Szw3muqyJ1P2/Jf/Qy00NBnqk/ggkbkD6JRs1ZiH+lWkVRLiNFHoiE=) 2025-04-13 00:25:38.346734 | orchestrator | 2025-04-13 00:25:38.349002 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:38.349410 | orchestrator | Sunday 13 April 2025 00:25:38 +0000 (0:00:01.129) 0:00:11.913 ********** 2025-04-13 00:25:39.405826 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdmu6Zo1rG6ZPFA3Y3TB+72Jd+zojebmxpa2TlcyMAQ9IhwXRAP9hfG7uhjMAIBHlInxTHTwn3aPPJDlos52V0ILK8Vh0bHBHXjkSI3z3LlO/y9ZhADdzW4AhpR/aw+2T+uXZpDiw0QIVzXcCLxOwpEh0YzhSzpjciQ+W6MKvEQ6JVJUb85LLy8B3JxKUYYBpSnAugEWmonJ66w3G5VyqJ5QkE5baFrFx8GaLVY4v2jXiZ3gArCH2ymPwBoUG0VwyD4GQYHBu3rnBKxLX4eGbECtUDoMrBLkHLICLWboEbcCiIA8JT1gBlVjrfB1HNwL34oKV9OR5RUQHA2ZpBmaVkH9fvfqUcuX+9/4gYwbkwel6GFcnFNxKdPbW0tmXyB0tPXLncQblkldBxerXPvcqAsExFWYbnATMZahLMOHjd83oaUmxeg613acmrNDyhi2+vdN/A80pWG+2Nlu6zwiVghUcHMkU2uoU/KN2ltQNJUfzmIEu7hmHByyzuWXz8Y+M=) 2025-04-13 00:25:39.406226 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJomMmv3sBKtjIt4rRS8eUWGKDlRg8cO3yaKSdCyZ+09bMknp7SoLddOvoJSnbN8WXUw6NoEzLe46SXR4/Cj3FU=) 2025-04-13 00:25:39.406285 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBxIWvz4eZCMZwQel+Qql/L1myWZh8couFrHCGRhOADp) 2025-04-13 00:25:39.407131 | orchestrator | 2025-04-13 00:25:39.407675 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:39.407830 | orchestrator | Sunday 13 April 2025 00:25:39 +0000 (0:00:01.059) 0:00:12.972 ********** 2025-04-13 00:25:40.482533 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEebO+zl4ArXFZPl3WfjcFzKtI47emzXhh/HMnjQO/yBpu/ZoEZxM70Vks3xAr9C8Q1rxSQDnuYDBh8PREHXRcbDjMs2mWfuInlpwBn5XvqDoFkRhEytOgQKIPeshsTny9Zjfk05/p1Cc6mkfUBYfSWmzUGTt+QLKecnr95xZCch8bK4TWzMPwO4Y60LlkEHOX1c7Sob4+56sxiGx7jhXUIEVADjra7jCV9c87MyKlPwKsYsTpHJwx3Ga3LuvN/Y3LStkWb/FxPd6dH8Pt53kM8F0/uJbilGNH/Q+Qbc49kaFWbG2Sckexh3TGmaJ+oMLczSQjP7olL0EWZyDmT1iRPA7/BsImFLROSW1TfQ0ZtTEqwuyh5KyMs2U4s0ihgmrnocvaWhNKMKcimfe9Xil/Y7iz2P9u6Sffx8SYMRyYMcVXZqJ8hkwrj5Vk3Fih0MT3BYSkCnM4vE0EPszDGeNtsPv29M+BPHfQBVDGKdw2o5XYJZn6yNBVpZpEBiQJfR8=) 2025-04-13 00:25:40.483188 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIJmpRTVlGOlS+EN+7oSuXKlZP8XopqlyZYsrec4wYXGpb9SBdO1TqwxAxN4nGJXvD9S7RgFPz9y+upR3hTBBsI=) 2025-04-13 00:25:40.483236 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO8fSiRGCxHh58z3HwP0UQWzXRo22grtAFc3/g2II33p) 2025-04-13 00:25:40.485842 | orchestrator | 2025-04-13 00:25:40.486359 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-04-13 00:25:40.486788 | orchestrator | Sunday 13 April 2025 00:25:40 +0000 (0:00:01.079) 0:00:14.052 ********** 2025-04-13 00:25:45.817006 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-04-13 00:25:45.817913 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-04-13 00:25:45.817959 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-04-13 00:25:45.818631 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-04-13 00:25:45.819962 | orchestrator | 
ok: [testbed-manager] => (item=testbed-node-0) 2025-04-13 00:25:45.820479 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-04-13 00:25:45.821059 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-04-13 00:25:45.821101 | orchestrator | 2025-04-13 00:25:45.821418 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-04-13 00:25:45.821914 | orchestrator | Sunday 13 April 2025 00:25:45 +0000 (0:00:05.334) 0:00:19.387 ********** 2025-04-13 00:25:45.982846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-04-13 00:25:45.983101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-04-13 00:25:45.984080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-04-13 00:25:45.984685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-04-13 00:25:45.985407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-04-13 00:25:45.985969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-04-13 00:25:45.986382 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-04-13 00:25:45.986757 | orchestrator | 2025-04-13 00:25:45.987220 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:45.987592 | orchestrator | Sunday 13 April 2025 00:25:45 +0000 (0:00:00.166) 0:00:19.553 ********** 2025-04-13 00:25:47.073816 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQUURBaqIPTrsriKcyb28QPNDMZ13467XgGIVdCIuoKGLZipG+XXqq+dSDIqtR/SrK2as+aXvzCUzMhNEmX9WPJTw2EDZoWglLalfS6ghOoEsDWO8DXjqxdz8b0MwM8iQMhK1uv/4Gg3PBh6XPuIVAbkVF8EA7T4N6zNZ5a7/8JF4S6GIJc24hWKwGXm/mcz7oY4sDQfjOxe6LNjJrUifszk96IHtLpHQt3T9jNkaOxVDH0CQHDiQ9eg6tt6peuIlb64LSS+TrRi8Bl6Yt7Kpl0cVz4HQTJJ/gZW8aQe1vnAKzZuRJEwnepuQCqgoNRsAt1LhSuQdwvjvMV2xpmyRtY96bbVsPaqOIIjD2qp2gUSm5bxnsgD/Yf8x3iusHat9CG8hZf0Rj9D2QYwTv6sW6/0ebrUTwIq3OKUhlesWJDY33gKGMDQ45Esn+xbdH3SjrP0QiXJzMaNHbsX84cjGVwisY1Sk7BGcUxkVUb1YvkpOfZIUCPP3grtDicSYy5DE=) 2025-04-13 00:25:47.075062 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAJS9tIL2veHo5rITPamzL9G/saC3G4oZfOrgozKEHqL1WwjyGUk1SVK5JcYmDrhqik/09xSRQ+9/pZiohi5mlU=) 2025-04-13 00:25:47.075117 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII+uLnqd5bhmTe9HIIyaf7DKev3c6XVtjfwZK8iyg9e1) 2025-04-13 00:25:47.076005 | orchestrator | 2025-04-13 00:25:47.076749 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:47.077843 | orchestrator | Sunday 13 April 2025 00:25:47 +0000 (0:00:01.090) 0:00:20.644 ********** 2025-04-13 00:25:48.125246 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDQ+qXKPyvAugeeceLOVbazoLst2ajw5HT4qCzkwrwC08BU9w+fXO9JMw3Dt2WCLnj6pgXkqIUKm3+YA8QoHcV2Aw7DpkOsKeRCE/anNMzuhMkjKq0CaBSmkOYgNLOsYzUx/JUsiAOcMIn6JQ3z19HWhzlEIj8r3Yl+EDFQypUN5kOciLlbmoPHQJ61kBk/URqgBLCug4F378w3EFWF8i01/MFhRGlb0U7iMYUb4fiWlJBelYxjAvfHyAVBfGYqdgJ8aS64fjjCHFLxiVHJ0aLQGHpiot1yhlGr4S/mvR52JynfjeG2jPh0ymO6BIcaMD2AaVjSoTtBfIx3jCqG4mAsOsBm1vdxlwWi2G4Dhus5oCVeL+MUovPrGfNb3573MV9URKTJRQ6DHAml1V+Wlu77QvRy7lmXu/2IHHDXx9DuLyr2+YZeoIwQCAff7ga/fAWXZyDgEDxdO2XIXNuWUnO49GlV2048DCf6AGCTZW3BQomVMYDuX1Xloq/G6PN3a4M=) 2025-04-13 00:25:48.125796 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEoVqIRZHmqZqCpUpM7c3WLvsx6oaWJUd9vrpJz7sZ6Ws2Tw+iBIy8zteoxDmaAXWk6jSkiIV43zPt/fwxFvxt8=) 2025-04-13 00:25:48.125845 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHSg2stmmbW4cM0cSuHtXoEN3v8gqC35W+EGvEjKclyL) 2025-04-13 00:25:48.126947 | orchestrator | 2025-04-13 00:25:48.128036 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:48.128825 | orchestrator | Sunday 13 April 2025 00:25:48 +0000 (0:00:01.051) 0:00:21.695 ********** 2025-04-13 00:25:49.204285 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvPlXRDe7tR5V3YT9mso0ElYbnZVdw5u5sxH9vgOgIziTNzVUs5QB/VBwCPRRlGm8Z7tkc6wuxv178RqlGJH9eUeFZetVmNwmUBv4SKwJYSYgnR4bjMq8bzO70RMrEDgFMAlufaSbT8yymjbiQQ9cEu4yNuwwRdYvxsdQRK7B9OVFNJvnBe0uZrNg6SynNtgrGoOAv9b58ElA6CVWaV6aZitImroWzrSUpeLelpMqTjeT3iGxoSCO1C0LvBZJq2SU0X3OUImN5pOMGX3jACdfunduwvTT/FccjezG0Ize27snK8QM1N1murNIiBkOrW8Nh4a40HNW9RRQLUzH7KD2El1XBAnIZByIErqxRFZXsSU9tiQNJoqAGsajzIbXCwakDJxJbeCiCpCGmNP+1zYx+Ru8VjkmXKvlS65eXoIhHVNVQmHolTsHabbpnXqZT7ZyavtEQPJN30h5DRnsg3UKFzTFWKfta3qLCsgIVkxJxy613UlXfFV3Dr63IWeVY2Fk=) 2025-04-13 00:25:49.204642 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOHhIrqDeRLgj+zfyPXDyFUChkpZo7wq11hE/eVhjAtdIFSq6Q/Y/7XhsDLOgTLB00YWJ2mHSPd4RyY8Vxob4tg=) 2025-04-13 00:25:49.206543 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID0b1+Y3QnImwZ9tYKBiXTNWCuYeXLwlJeC//13vrb9k) 2025-04-13 00:25:49.206956 | orchestrator | 2025-04-13 00:25:49.207648 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:49.207808 | orchestrator | Sunday 13 April 2025 00:25:49 +0000 (0:00:01.078) 0:00:22.774 ********** 2025-04-13 00:25:50.295632 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC28zvMl3A0p4cUDnwBGpYrUSLuGDxkogNZ8jpRRa3iT6s1uE8LVe3O7pZxg56CsmAr52FGEhBIjQjALBYdZaLGoLfZ9SF8klcdWj7Pp5mVn9jo/P6i/MNRJMlf5dlSWiDlTJtRWcUefMJT7wTHRstX8kKwZ+6Re6nmZLfEMvni5pSjXAcu9k/42/j56T3rbmTBXIY4krjDaxuAVcDhV7NIPXiS9CNZ1+u6PA47aQl6VEbldbLWLlM6saM12DKndQ1JLyzHt4LVAHF+oeSpFpEqu97PM+CGL6UXlu/j9LONOOI9VOi5RJdtXjQIxVdMgc2lhMND8KDd+xAC7cI47TtFqFfmLrvq8/1RWqGDgU94A5xfW8vBJV6l4CoN7bmbhnPNSmZo7+g5A807LB62KK7lbYf49Jxxju6rfI3wafRxJbbg7xqfQ9Tjar52mlADEWCnLJdusOcgTon/ItMQTHYzURXqxlr63AAiPr2onyOLaZhbtSKI9G+hY+KvRQ6LfFM=) 2025-04-13 00:25:50.296018 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeASxxrURZDLR0Oa+ic9n2ExTljIOXPEqpP3Ca3KgIwNEZKvbvp5Y6pw2lxz92kq1duxUm9ib2lFGaIrAsBops=) 2025-04-13 00:25:50.296076 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKhYUoQppWlu0DHqWE3cJynn9xtw28pn/S3xZN01nnTc) 2025-04-13 00:25:50.296747 | orchestrator | 2025-04-13 00:25:50.297334 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:50.297775 | orchestrator | Sunday 13 April 2025 00:25:50 +0000 (0:00:01.091) 0:00:23.865 
********** 2025-04-13 00:25:51.415700 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHmA8jW5Oie7LnlNH6dwW+8jHzqNRkcjDPhj/0SSJieF) 2025-04-13 00:25:51.416682 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrrPM+6dfKzIaOwhNaIfc7oVVaPuGW2X+ZDHsFISrsB57XkCx7BJ/10jZqcfmisFzk1vx8Q0H56Qernn2ib2sLqZ9bWZbmWM6kWyN1IWcTmoALqd5mvjph9rTIGT9pjNEau7wPGc92cvm5ivn64CUiBFVavwQgU0Y615lvXoxuZe5h5gRurnuTsZonGhmh47I9VSkI8LdWH5K6dlDrLaRjMYXugqwrSeIHS6V+qtZTkzqwPal4eR/3QkRUPNhHBmnyj5Qx/BoitIGNDktg0kk72fUt7h8C1RykQxeMHA2fxIWc+Aquczb1DN3XMT+/kpPcmBu81upUmio3c3Z67ah+c0hiXQwAHSwsvkCm+j1OKY4u/kisaHJWtc9MXFOc8YFnGb37vRxsYFJjGUWHmhfKa6SGOE49mjQv8b2Y15nqGpVyTlKalQwryqOhK9mxQF0b/YGyq6V11Szw3muqyJ1P2/Jf/Qy00NBnqk/ggkbkD6JRs1ZiH+lWkVRLiNFHoiE=) 2025-04-13 00:25:51.416825 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGZHwidUAOFliSf6huhRKznGoa/ksv8Po1qJpEUYXiRyrZxj9nlCZdx8CUvoKsF9p5+ms3wyfNwkBhgaVJUom0I=) 2025-04-13 00:25:51.418300 | orchestrator | 2025-04-13 00:25:51.419315 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:51.419630 | orchestrator | Sunday 13 April 2025 00:25:51 +0000 (0:00:01.119) 0:00:24.985 ********** 2025-04-13 00:25:52.503642 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBxIWvz4eZCMZwQel+Qql/L1myWZh8couFrHCGRhOADp) 2025-04-13 00:25:52.503793 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCdmu6Zo1rG6ZPFA3Y3TB+72Jd+zojebmxpa2TlcyMAQ9IhwXRAP9hfG7uhjMAIBHlInxTHTwn3aPPJDlos52V0ILK8Vh0bHBHXjkSI3z3LlO/y9ZhADdzW4AhpR/aw+2T+uXZpDiw0QIVzXcCLxOwpEh0YzhSzpjciQ+W6MKvEQ6JVJUb85LLy8B3JxKUYYBpSnAugEWmonJ66w3G5VyqJ5QkE5baFrFx8GaLVY4v2jXiZ3gArCH2ymPwBoUG0VwyD4GQYHBu3rnBKxLX4eGbECtUDoMrBLkHLICLWboEbcCiIA8JT1gBlVjrfB1HNwL34oKV9OR5RUQHA2ZpBmaVkH9fvfqUcuX+9/4gYwbkwel6GFcnFNxKdPbW0tmXyB0tPXLncQblkldBxerXPvcqAsExFWYbnATMZahLMOHjd83oaUmxeg613acmrNDyhi2+vdN/A80pWG+2Nlu6zwiVghUcHMkU2uoU/KN2ltQNJUfzmIEu7hmHByyzuWXz8Y+M=) 2025-04-13 00:25:52.504658 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJomMmv3sBKtjIt4rRS8eUWGKDlRg8cO3yaKSdCyZ+09bMknp7SoLddOvoJSnbN8WXUw6NoEzLe46SXR4/Cj3FU=) 2025-04-13 00:25:52.505424 | orchestrator | 2025-04-13 00:25:52.505648 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-13 00:25:52.506202 | orchestrator | Sunday 13 April 2025 00:25:52 +0000 (0:00:01.087) 0:00:26.073 ********** 2025-04-13 00:25:53.573964 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO8fSiRGCxHh58z3HwP0UQWzXRo22grtAFc3/g2II33p) 2025-04-13 00:25:53.574512 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEebO+zl4ArXFZPl3WfjcFzKtI47emzXhh/HMnjQO/yBpu/ZoEZxM70Vks3xAr9C8Q1rxSQDnuYDBh8PREHXRcbDjMs2mWfuInlpwBn5XvqDoFkRhEytOgQKIPeshsTny9Zjfk05/p1Cc6mkfUBYfSWmzUGTt+QLKecnr95xZCch8bK4TWzMPwO4Y60LlkEHOX1c7Sob4+56sxiGx7jhXUIEVADjra7jCV9c87MyKlPwKsYsTpHJwx3Ga3LuvN/Y3LStkWb/FxPd6dH8Pt53kM8F0/uJbilGNH/Q+Qbc49kaFWbG2Sckexh3TGmaJ+oMLczSQjP7olL0EWZyDmT1iRPA7/BsImFLROSW1TfQ0ZtTEqwuyh5KyMs2U4s0ihgmrnocvaWhNKMKcimfe9Xil/Y7iz2P9u6Sffx8SYMRyYMcVXZqJ8hkwrj5Vk3Fih0MT3BYSkCnM4vE0EPszDGeNtsPv29M+BPHfQBVDGKdw2o5XYJZn6yNBVpZpEBiQJfR8=) 2025-04-13 00:25:53.575378 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIJmpRTVlGOlS+EN+7oSuXKlZP8XopqlyZYsrec4wYXGpb9SBdO1TqwxAxN4nGJXvD9S7RgFPz9y+upR3hTBBsI=)
2025-04-13 00:25:53.575440 | orchestrator |
2025-04-13 00:25:53.575770 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-04-13 00:25:53.576022 | orchestrator | Sunday 13 April 2025 00:25:53 +0000 (0:00:01.070) 0:00:27.143 **********
2025-04-13 00:25:53.756184 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-04-13 00:25:53.756461 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-04-13 00:25:53.757293 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-04-13 00:25:53.759007 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-04-13 00:25:53.759715 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-04-13 00:25:53.760385 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-04-13 00:25:53.761165 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-04-13 00:25:53.761886 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:25:53.763064 | orchestrator |
2025-04-13 00:25:53.763468 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-04-13 00:25:53.764397 | orchestrator | Sunday 13 April 2025 00:25:53 +0000 (0:00:00.184) 0:00:27.327 **********
2025-04-13 00:25:53.814691 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:25:53.816016 | orchestrator |
2025-04-13 00:25:53.816072 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-04-13 00:25:53.817335 | orchestrator | Sunday 13 April 2025 00:25:53 +0000 (0:00:00.058) 0:00:27.386 **********
2025-04-13 00:25:53.878064 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:25:53.878264 | orchestrator |
2025-04-13 00:25:53.879431 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-04-13 00:25:53.880290 | orchestrator | Sunday 13 April 2025 00:25:53 +0000 (0:00:00.062) 0:00:27.449 **********
2025-04-13 00:25:54.624666 | orchestrator | changed: [testbed-manager]
2025-04-13 00:25:54.625381 | orchestrator |
2025-04-13 00:25:54.627278 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:25:54.627316 | orchestrator | 2025-04-13 00:25:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:25:54.628114 | orchestrator | 2025-04-13 00:25:54 | INFO  | Please wait and do not abort execution.
2025-04-13 00:25:54.628149 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-13 00:25:54.629250 | orchestrator |
2025-04-13 00:25:54.630361 | orchestrator | Sunday 13 April 2025 00:25:54 +0000 (0:00:00.746) 0:00:28.195 **********
2025-04-13 00:25:54.631653 | orchestrator | ===============================================================================
2025-04-13 00:25:54.632490 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.02s
2025-04-13 00:25:54.633225 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.33s
2025-04-13 00:25:54.633801 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.25s
2025-04-13 00:25:54.634943 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2025-04-13 00:25:54.635338 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2025-04-13 00:25:54.636901 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-04-13 00:25:54.638268 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-04-13 00:25:54.638307 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-04-13 00:25:54.638969 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-04-13 00:25:54.639004 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-04-13 00:25:54.639604 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-04-13 00:25:54.640112 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-04-13 00:25:54.640617 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-04-13 00:25:54.641255 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-04-13 00:25:54.641698 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-04-13 00:25:54.642181 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-04-13 00:25:54.642761 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.75s
2025-04-13 00:25:54.643423 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s
2025-04-13 00:25:54.644743 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s
2025-04-13 00:25:54.645113 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s
2025-04-13 00:25:55.014403 | orchestrator | + osism apply squid
2025-04-13 00:25:56.530619 | orchestrator | 2025-04-13 00:25:56 | INFO  | Task 0b27fc02-5b0d-4f25-aa30-de22c40473ba (squid) was prepared for execution.
2025-04-13 00:25:59.579089 | orchestrator | 2025-04-13 00:25:56 | INFO  | It takes a moment until task 0b27fc02-5b0d-4f25-aa30-de22c40473ba (squid) has been started and output is visible here.
2025-04-13 00:25:59.579228 | orchestrator |
2025-04-13 00:25:59.579697 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-04-13 00:25:59.580376 | orchestrator |
2025-04-13 00:25:59.580404 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-04-13 00:25:59.582122 | orchestrator | Sunday 13 April 2025 00:25:59 +0000 (0:00:00.108) 0:00:00.108 **********
2025-04-13 00:25:59.696416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-04-13 00:25:59.696786 | orchestrator |
2025-04-13 00:25:59.697310 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-04-13 00:25:59.697470 | orchestrator | Sunday 13 April 2025 00:25:59 +0000 (0:00:00.120) 0:00:00.229 **********
2025-04-13 00:26:01.140785 | orchestrator | ok: [testbed-manager]
2025-04-13 00:26:01.142188 | orchestrator |
2025-04-13 00:26:01.142209 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-04-13 00:26:01.143018 | orchestrator | Sunday 13 April 2025 00:26:01 +0000 (0:00:01.442) 0:00:01.671 **********
2025-04-13 00:26:02.374943 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-04-13 00:26:02.376048 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-04-13 00:26:02.377096 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-04-13 00:26:02.377430 | orchestrator |
2025-04-13 00:26:02.377467 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-04-13 00:26:02.377719 | orchestrator | Sunday 13 April 2025 00:26:02 +0000 (0:00:01.235) 0:00:02.906 **********
2025-04-13 00:26:03.439658 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-04-13 00:26:03.440195 | orchestrator |
2025-04-13 00:26:03.442103 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-04-13 00:26:03.442382 | orchestrator | Sunday 13 April 2025 00:26:03 +0000 (0:00:01.064) 0:00:03.971 **********
2025-04-13 00:26:03.809002 | orchestrator | ok: [testbed-manager]
2025-04-13 00:26:03.809529 | orchestrator |
2025-04-13 00:26:03.809609 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-04-13 00:26:03.810627 | orchestrator | Sunday 13 April 2025 00:26:03 +0000 (0:00:00.367) 0:00:04.339 **********
2025-04-13 00:26:04.793465 | orchestrator | changed: [testbed-manager]
2025-04-13 00:26:04.794599 | orchestrator |
2025-04-13 00:26:04.794921 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-04-13 00:26:04.795943 | orchestrator | Sunday 13 April 2025 00:26:04 +0000 (0:00:00.985) 0:00:05.325 **********
2025-04-13 00:26:36.982865 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-04-13 00:26:49.321602 | orchestrator | ok: [testbed-manager]
2025-04-13 00:26:49.321750 | orchestrator |
2025-04-13 00:26:49.321772 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-04-13 00:26:49.321789 | orchestrator | Sunday 13 April 2025 00:26:36 +0000 (0:00:32.184) 0:00:37.509 **********
2025-04-13 00:26:49.321821 | orchestrator | changed: [testbed-manager]
2025-04-13 00:27:49.395669 | orchestrator |
2025-04-13 00:27:49.395905 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-04-13 00:27:49.395931 | orchestrator | Sunday 13 April 2025 00:26:49 +0000 (0:00:12.340) 0:00:49.850 **********
2025-04-13 00:27:49.395963 | orchestrator | Pausing for 60 seconds
2025-04-13 00:27:49.469693 | orchestrator | changed: [testbed-manager]
2025-04-13 00:27:49.469812 | orchestrator |
2025-04-13 00:27:49.469831 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-04-13 00:27:49.469848 | orchestrator | Sunday 13 April 2025 00:27:49 +0000 (0:01:00.071) 0:01:49.922 **********
2025-04-13 00:27:49.469982 | orchestrator | ok: [testbed-manager]
2025-04-13 00:27:49.470011 | orchestrator |
2025-04-13 00:27:49.470360 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-04-13 00:27:49.471298 | orchestrator | Sunday 13 April 2025 00:27:49 +0000 (0:00:00.078) 0:01:50.001 **********
2025-04-13 00:27:50.116859 | orchestrator | changed: [testbed-manager]
2025-04-13 00:27:50.118174 | orchestrator |
2025-04-13 00:27:50.118238 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:27:50.118265 | orchestrator | 2025-04-13 00:27:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:27:50.118289 | orchestrator | 2025-04-13 00:27:50 | INFO  | Please wait and do not abort execution.
2025-04-13 00:27:50.118327 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:27:50.118943 | orchestrator |
2025-04-13 00:27:50.120019 | orchestrator | Sunday 13 April 2025 00:27:50 +0000 (0:00:00.646) 0:01:50.647 **********
2025-04-13 00:27:50.120979 | orchestrator | ===============================================================================
2025-04-13 00:27:50.121677 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s
2025-04-13 00:27:50.122172 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.18s
2025-04-13 00:27:50.122739 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.34s
2025-04-13 00:27:50.123241 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.44s
2025-04-13 00:27:50.123868 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.24s
2025-04-13 00:27:50.124389 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s
2025-04-13 00:27:50.124869 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.99s
2025-04-13 00:27:50.125753 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s
2025-04-13 00:27:50.126337 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s
2025-04-13 00:27:50.126924 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.12s
2025-04-13 00:27:50.127352 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2025-04-13 00:27:50.521280 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-04-13 00:27:50.526129 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-04-13 00:27:50.526242 | orchestrator | ++ semver 8.1.0 9.0.0
2025-04-13 00:27:50.579376 | orchestrator | + [[ -1 -lt 0 ]]
2025-04-13 00:27:50.583112 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-04-13 00:27:50.583171 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml
2025-04-13 00:27:50.583198 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-04-13 00:27:50.589813 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-04-13 00:27:50.595559 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-04-13 00:27:52.093673 | orchestrator | 2025-04-13 00:27:52 | INFO  | Task fa152ba6-1740-4c3a-894d-6415f243ea57 (operator) was prepared for execution.
2025-04-13 00:27:55.130222 | orchestrator | 2025-04-13 00:27:52 | INFO  | It takes a moment until task fa152ba6-1740-4c3a-894d-6415f243ea57 (operator) has been started and output is visible here.
2025-04-13 00:27:55.130371 | orchestrator |
2025-04-13 00:27:55.131494 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-04-13 00:27:55.132838 | orchestrator |
2025-04-13 00:27:55.139168 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-13 00:27:55.139314 | orchestrator | Sunday 13 April 2025 00:27:55 +0000 (0:00:00.089) 0:00:00.089 **********
2025-04-13 00:27:58.449662 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:27:58.449918 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:27:58.449945 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:27:58.449960 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:27:58.449981 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:27:58.450933 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:27:58.452080 | orchestrator |
2025-04-13 00:27:58.452861 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-04-13 00:27:58.454305 | orchestrator | Sunday 13 April 2025 00:27:58 +0000 (0:00:03.320) 0:00:03.410 **********
2025-04-13 00:27:59.216060 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:27:59.219468 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:27:59.219589 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:27:59.220409 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:27:59.220481 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:27:59.220504 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:27:59.221402 | orchestrator |
2025-04-13 00:27:59.221439 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-04-13 00:27:59.222222 | orchestrator |
2025-04-13 00:27:59.222720 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-04-13 00:27:59.225468 | orchestrator | Sunday 13 April 2025 00:27:59 +0000 (0:00:00.766) 0:00:04.176 **********
2025-04-13 00:27:59.284829 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:27:59.306201 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:27:59.325718 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:27:59.368176 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:27:59.368290 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:27:59.369211 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:27:59.369924 | orchestrator |
2025-04-13 00:27:59.370559 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-04-13 00:27:59.371188 | orchestrator | Sunday 13 April 2025 00:27:59 +0000 (0:00:00.154) 0:00:04.330 **********
2025-04-13 00:27:59.434271 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:27:59.463267 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:27:59.480863 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:27:59.534697 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:27:59.535468 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:27:59.539080 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:28:00.137102 | orchestrator |
2025-04-13 00:28:00.137225 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-04-13 00:28:00.137246 | orchestrator | Sunday 13 April 2025 00:27:59 +0000 (0:00:00.166) 0:00:04.497 **********
2025-04-13 00:28:00.137279 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:28:00.137773 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:28:00.139016 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:28:00.140355 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:28:00.141481 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:28:00.142148 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:28:00.142980 | orchestrator |
2025-04-13 00:28:00.143844 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-04-13 00:28:00.144197 | orchestrator | Sunday 13 April 2025 00:28:00 +0000 (0:00:00.601) 0:00:05.099 **********
2025-04-13 00:28:00.928919 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:28:00.930338 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:28:00.931381 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:28:00.931775 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:28:00.932993 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:28:00.934296 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:28:00.934797 | orchestrator |
2025-04-13 00:28:00.936847 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-04-13 00:28:00.937493 | orchestrator | Sunday 13 April 2025 00:28:00 +0000 (0:00:00.790) 0:00:05.889 **********
2025-04-13 00:28:02.079472 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-04-13 00:28:02.081104 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-04-13 00:28:02.081156 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-04-13 00:28:02.082507 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-04-13 00:28:02.083990 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-04-13 00:28:02.085216 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-04-13 00:28:02.088055 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-04-13 00:28:02.088963 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-04-13 00:28:02.089060 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-04-13 00:28:02.090993 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-04-13 00:28:02.091437 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-04-13 00:28:02.093168 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-04-13 00:28:02.093861 | orchestrator |
2025-04-13 00:28:02.094519 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-04-13 00:28:02.095087 | orchestrator | Sunday 13 April 2025 00:28:02 +0000 (0:00:01.147) 0:00:07.037 **********
2025-04-13 00:28:03.379577 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:28:03.380281 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:28:03.382504 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:28:03.382694 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:28:03.382720 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:28:03.382902 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:28:03.383620 | orchestrator |
2025-04-13 00:28:03.383780 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-04-13 00:28:03.383810 | orchestrator | Sunday 13 April 2025 00:28:03 +0000 (0:00:01.299) 0:00:08.337 **********
2025-04-13 00:28:04.656924 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-04-13 00:28:04.657723 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-04-13 00:28:04.657767 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-04-13 00:28:04.776472 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-04-13 00:28:04.777168 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-04-13 00:28:04.777212 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-04-13 00:28:04.778728 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-04-13 00:28:04.781640 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-04-13 00:28:04.781785 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-04-13 00:28:04.782580 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-04-13 00:28:04.783387 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-04-13 00:28:04.784120 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-04-13 00:28:04.787639 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-04-13 00:28:04.788448 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-04-13 00:28:04.789302 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-04-13 00:28:04.789857 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-04-13 00:28:04.790579 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-04-13 00:28:04.790989 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-04-13 00:28:04.791393 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-04-13 00:28:04.791953 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-04-13 00:28:04.792463 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-04-13 00:28:04.793240 | orchestrator |
2025-04-13 00:28:04.794456 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-04-13 00:28:04.795000 | orchestrator | Sunday 13 April 2025 00:28:04 +0000 (0:00:01.399) 0:00:09.736 **********
2025-04-13 00:28:05.379645 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:28:05.381064 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:28:05.383316 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:28:05.383447 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:28:05.383787 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:28:05.386288 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:28:05.520333 | orchestrator |
2025-04-13 00:28:05.520446 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-04-13 00:28:05.520463 | orchestrator | Sunday 13 April 2025 00:28:05 +0000 (0:00:00.604) 0:00:10.341 **********
2025-04-13 00:28:05.520490 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:28:05.563594 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:28:05.592742 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:28:05.644069 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:28:05.644227 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:28:05.644253 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:28:05.644640 | orchestrator |
2025-04-13 00:28:05.644958 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-04-13 00:28:05.645246 | orchestrator | Sunday 13 April 2025 00:28:05 +0000 (0:00:00.263) 0:00:10.604 **********
2025-04-13 00:28:06.336619 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-04-13 00:28:06.337170 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-04-13 00:28:06.337227 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:28:06.337239 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:28:06.337249 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-04-13 00:28:06.337259 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:28:06.337274 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-04-13 00:28:06.337755 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:28:06.338338 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-04-13 00:28:06.338916 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-04-13 00:28:06.339322 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:28:06.339804 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:28:06.340173 | orchestrator |
2025-04-13 00:28:06.340602 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-04-13 00:28:06.341222 | orchestrator | Sunday 13 April 2025 00:28:06 +0000 (0:00:00.692) 0:00:11.297 **********
2025-04-13 00:28:06.386952 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:28:06.411435 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:28:06.440315 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:28:06.471832 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:28:06.514119 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:28:06.515238 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:28:06.515276 | orchestrator |
2025-04-13 00:28:06.515958 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-04-13 00:28:06.516653 | orchestrator | Sunday 13 April 2025 00:28:06 +0000 (0:00:00.178) 0:00:11.476 **********
2025-04-13 00:28:06.559598 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:28:06.579452 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:28:06.621013 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:28:06.658771 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:28:06.659240 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:28:06.659430 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:28:06.659724 | orchestrator |
2025-04-13 00:28:06.660234 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-04-13 00:28:06.660420 | orchestrator | Sunday 13 April 2025 00:28:06 +0000 (0:00:00.145) 0:00:11.621 **********
2025-04-13 00:28:06.729865 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:28:06.748858 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:28:06.771739 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:28:06.799402 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:28:06.799714 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:28:06.799901 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:28:06.799920 | orchestrator |
2025-04-13 00:28:06.799937 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-04-13 00:28:06.800219 | orchestrator | Sunday 13 April 2025 00:28:06 +0000 (0:00:00.141) 0:00:11.763 **********
2025-04-13 00:28:07.456035 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:28:07.456305 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:28:07.456924 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:28:07.457680 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:28:07.458347 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:28:07.458889 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:28:07.459889 | orchestrator |
2025-04-13 00:28:07.460088 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-04-13 00:28:07.461210 | orchestrator | Sunday 13 April 2025 00:28:07 +0000 (0:00:00.652) 0:00:12.415 **********
2025-04-13 00:28:07.537807 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:28:07.559172 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:28:07.584386 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:28:07.693654 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:28:07.693931 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:28:07.694379 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:28:07.694434 | orchestrator |
2025-04-13 00:28:07.696480 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:28:07.697478 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-13 00:28:07.697517 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-13 00:28:07.697567 | orchestrator | 2025-04-13 00:28:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:28:07.697586 | orchestrator | 2025-04-13 00:28:07 | INFO  | Please wait and do not abort execution.
2025-04-13 00:28:07.697613 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-13 00:28:07.698003 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-13 00:28:07.698272 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-13 00:28:07.699963 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-13 00:28:07.700741 | orchestrator |
2025-04-13 00:28:07.700779 | orchestrator | Sunday 13 April 2025 00:28:07 +0000 (0:00:00.238) 0:00:12.654 **********
2025-04-13 00:28:07.700798 | orchestrator | ===============================================================================
2025-04-13 00:28:07.700817 | orchestrator | Gathering Facts --------------------------------------------------------- 3.32s
2025-04-13 00:28:07.700842 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.40s
2025-04-13 00:28:07.701052 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.30s
2025-04-13 00:28:07.701846 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s
2025-04-13 00:28:07.702448 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s
2025-04-13 00:28:07.704878 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s
2025-04-13 00:28:07.704951 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s
2025-04-13 00:28:07.704966 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2025-04-13 00:28:07.704977 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s
2025-04-13 00:28:07.704990 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2025-04-13 00:28:07.705002 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.26s
2025-04-13 00:28:07.705017 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2025-04-13 00:28:07.705083 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2025-04-13 00:28:07.705910 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2025-04-13 00:28:07.706171 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s
2025-04-13 00:28:07.706200 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2025-04-13 00:28:07.706613 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-04-13 00:28:08.156998 | orchestrator | + osism apply --environment custom facts
2025-04-13 00:28:09.493930 | orchestrator | 2025-04-13 00:28:09 | INFO  | Trying to run play facts in environment custom
2025-04-13 00:28:09.543233 | orchestrator | 2025-04-13 00:28:09 | INFO  | Task 70ba549c-b423-4c07-93b2-340115d9cec4 (facts) was prepared for execution.
2025-04-13 00:28:12.651984 | orchestrator | 2025-04-13 00:28:09 | INFO  | It takes a moment until task 70ba549c-b423-4c07-93b2-340115d9cec4 (facts) has been started and output is visible here.
2025-04-13 00:28:12.652162 | orchestrator |
2025-04-13 00:28:12.653047 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-04-13 00:28:12.653099 | orchestrator |
2025-04-13 00:28:12.653848 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-04-13 00:28:12.654500 | orchestrator | Sunday 13 April 2025 00:28:12 +0000 (0:00:00.106) 0:00:00.106 **********
2025-04-13 00:28:13.904297 | orchestrator | ok: [testbed-manager]
2025-04-13 00:28:14.932446 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:28:14.932742 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:28:14.933074 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:28:14.934846 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:28:14.936285 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:28:14.936341 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:28:14.937619 | orchestrator |
2025-04-13 00:28:14.938593 | orchestrator | TASK [Copy fact file] **********************************************************
2025-04-13 00:28:14.940362 | orchestrator | Sunday 13 April 2025 00:28:14 +0000 (0:00:02.283) 0:00:02.389 **********
2025-04-13 00:28:16.078367 | orchestrator | ok: [testbed-manager]
2025-04-13 00:28:16.933186 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:28:16.933842 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:28:16.933887 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:28:16.934521 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:28:16.936314 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:28:16.936408 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:28:16.936896 | orchestrator |
2025-04-13 00:28:16.937409 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-04-13 00:28:16.937828 | orchestrator |
2025-04-13 00:28:16.938708 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-04-13 00:28:16.940168 | orchestrator | Sunday 13 April 2025 00:28:16 +0000 (0:00:01.998) 0:00:04.388 **********
2025-04-13 00:28:17.039290 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:28:17.040049 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:28:17.040105 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:28:17.042000 | orchestrator |
2025-04-13 00:28:17.042276 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-04-13 00:28:17.043449 | orchestrator | Sunday 13 April 2025 00:28:17 +0000 (0:00:00.109) 0:00:04.498 **********
2025-04-13 00:28:17.187049 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:28:17.190399 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:28:17.190888 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:28:17.190977 | orchestrator |
2025-04-13 00:28:17.190999 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-04-13 00:28:17.191029 | orchestrator | Sunday 13 April 2025 00:28:17 +0000 (0:00:00.146) 0:00:04.645 **********
2025-04-13 00:28:17.305655 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:28:17.308194 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:28:17.308960 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:28:17.308983 | orchestrator |
2025-04-13 00:28:17.309767 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-04-13 00:28:17.310549 | orchestrator | Sunday 13 April 2025 00:28:17 +0000 (0:00:00.118) 0:00:04.763 **********
2025-04-13 00:28:17.461731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:28:17.463080 | orchestrator |
2025-04-13 00:28:17.463148 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-04-13 00:28:17.464007 | orchestrator | Sunday 13 April 2025 00:28:17 +0000 (0:00:00.153) 0:00:04.917 **********
2025-04-13 00:28:17.874912 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:28:17.875144 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:28:17.875449 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:28:17.879039 | orchestrator |
2025-04-13 00:28:17.988640 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-04-13 00:28:17.988756 | orchestrator | Sunday 13 April 2025 00:28:17 +0000 (0:00:00.415) 0:00:05.333 **********
2025-04-13 00:28:17.988818 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:28:17.989102 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:28:17.989716 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:28:17.990593 | orchestrator |
2025-04-13 00:28:17.993925 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-04-13 00:28:18.948746 | orchestrator | Sunday 13 April 2025 00:28:17 +0000 (0:00:00.113) 0:00:05.446 **********
2025-04-13 00:28:18.948882 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:28:18.949481 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:28:18.950140 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:28:18.950742 | orchestrator |
2025-04-13 00:28:18.951354 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-04-13 00:28:18.951823 | orchestrator | Sunday 13 April 2025 00:28:18 +0000 (0:00:00.957) 0:00:06.404 **********
2025-04-13 00:28:19.400976 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:28:19.401326 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:28:19.401369 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:28:19.401737 | orchestrator |
2025-04-13 00:28:19.401767 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-04-13 00:28:19.401792 | orchestrator | Sunday 13 April 2025 00:28:19 +0000 (0:00:00.453) 0:00:06.857 **********
2025-04-13 00:28:20.467281 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:28:20.467838 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:28:20.467927 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:28:20.467952 | orchestrator |
2025-04-13 00:28:20.468020 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-04-13 00:28:33.405813 | orchestrator | Sunday 13 April 2025 00:28:20 +0000 (0:00:01.063) 0:00:07.920 **********
2025-04-13 00:28:33.405979 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:28:33.406105 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:28:33.406299 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:28:33.406375 | orchestrator |
2025-04-13 00:28:33.409973 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-04-13 00:28:33.456273 | orchestrator | Sunday 13 April 2025 00:28:33 +0000 (0:00:12.934) 0:00:20.855 **********
2025-04-13 00:28:33.456397 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:28:33.499968 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:28:33.501350 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:28:33.501432 | orchestrator |
2025-04-13 00:28:33.502814 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-04-13 00:28:33.506163 | orchestrator | Sunday 13 April 2025 00:28:33 +0000 (0:00:00.103) 0:00:20.959 **********
2025-04-13 00:28:40.272003 |
orchestrator | changed: [testbed-node-5] 2025-04-13 00:28:40.272719 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:28:40.272767 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:28:40.273963 | orchestrator | 2025-04-13 00:28:40.274451 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-13 00:28:40.274967 | orchestrator | Sunday 13 April 2025 00:28:40 +0000 (0:00:06.766) 0:00:27.725 ********** 2025-04-13 00:28:40.708354 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:28:40.708747 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:28:40.708972 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:28:40.709354 | orchestrator | 2025-04-13 00:28:40.710146 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-04-13 00:28:40.710599 | orchestrator | Sunday 13 April 2025 00:28:40 +0000 (0:00:00.439) 0:00:28.165 ********** 2025-04-13 00:28:44.127223 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-04-13 00:28:44.127452 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-04-13 00:28:44.128449 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-04-13 00:28:44.130341 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-04-13 00:28:44.131024 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-04-13 00:28:44.133450 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-04-13 00:28:44.134098 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-04-13 00:28:44.134747 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-04-13 00:28:44.135698 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-04-13 00:28:44.136744 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 
2025-04-13 00:28:44.137565 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-04-13 00:28:44.137850 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-04-13 00:28:44.138996 | orchestrator | 2025-04-13 00:28:44.139638 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-13 00:28:44.140588 | orchestrator | Sunday 13 April 2025 00:28:44 +0000 (0:00:03.418) 0:00:31.583 ********** 2025-04-13 00:28:45.249417 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:28:45.252472 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:28:45.252511 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:28:45.252558 | orchestrator | 2025-04-13 00:28:45.252583 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-13 00:28:45.252995 | orchestrator | 2025-04-13 00:28:45.253446 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-13 00:28:45.253934 | orchestrator | Sunday 13 April 2025 00:28:45 +0000 (0:00:01.118) 0:00:32.701 ********** 2025-04-13 00:28:46.986761 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:28:50.199589 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:28:50.200240 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:28:50.201023 | orchestrator | ok: [testbed-manager] 2025-04-13 00:28:50.201789 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:28:50.202734 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:28:50.203489 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:28:50.204417 | orchestrator | 2025-04-13 00:28:50.205194 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:28:50.205620 | orchestrator | 2025-04-13 00:28:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-04-13 00:28:50.206809 | orchestrator | 2025-04-13 00:28:50 | INFO  | Please wait and do not abort execution. 2025-04-13 00:28:50.206840 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 00:28:50.207624 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 00:28:50.208249 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 00:28:50.208640 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 00:28:50.209271 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:28:50.209848 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:28:50.210439 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:28:50.210712 | orchestrator | 2025-04-13 00:28:50.211170 | orchestrator | Sunday 13 April 2025 00:28:50 +0000 (0:00:04.953) 0:00:37.655 ********** 2025-04-13 00:28:50.211413 | orchestrator | =============================================================================== 2025-04-13 00:28:50.211759 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.94s 2025-04-13 00:28:50.212149 | orchestrator | Install required packages (Debian) -------------------------------------- 6.77s 2025-04-13 00:28:50.212458 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.95s 2025-04-13 00:28:50.212722 | orchestrator | Copy fact files --------------------------------------------------------- 3.42s 2025-04-13 00:28:50.213065 | orchestrator | Create custom facts directory ------------------------------------------- 2.28s 2025-04-13 00:28:50.213351 | orchestrator | Copy fact 
file ---------------------------------------------------------- 2.00s 2025-04-13 00:28:50.213658 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.12s 2025-04-13 00:28:50.213972 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s 2025-04-13 00:28:50.214325 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.96s 2025-04-13 00:28:50.214587 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s 2025-04-13 00:28:50.214929 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s 2025-04-13 00:28:50.215342 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s 2025-04-13 00:28:50.215577 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2025-04-13 00:28:50.215886 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.15s 2025-04-13 00:28:50.216226 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.12s 2025-04-13 00:28:50.216475 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-04-13 00:28:50.216759 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-04-13 00:28:50.217029 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-04-13 00:28:50.615306 | orchestrator | + osism apply bootstrap 2025-04-13 00:28:52.091085 | orchestrator | 2025-04-13 00:28:52 | INFO  | Task 4cb9b1bd-742a-4893-9616-5c342a608026 (bootstrap) was prepared for execution. 2025-04-13 00:28:55.251144 | orchestrator | 2025-04-13 00:28:52 | INFO  | It takes a moment until task 4cb9b1bd-742a-4893-9616-5c342a608026 (bootstrap) has been started and output is visible here. 
2025-04-13 00:28:55.251308 | orchestrator | 2025-04-13 00:28:55.251961 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-04-13 00:28:55.252481 | orchestrator | 2025-04-13 00:28:55.253078 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-04-13 00:28:55.254971 | orchestrator | Sunday 13 April 2025 00:28:55 +0000 (0:00:00.106) 0:00:00.106 ********** 2025-04-13 00:28:55.337869 | orchestrator | ok: [testbed-manager] 2025-04-13 00:28:55.362223 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:28:55.402149 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:28:55.437569 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:28:55.534144 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:28:55.534866 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:28:55.535698 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:28:55.536991 | orchestrator | 2025-04-13 00:28:55.537338 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-13 00:28:55.539200 | orchestrator | 2025-04-13 00:28:55.540968 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-13 00:28:59.113988 | orchestrator | Sunday 13 April 2025 00:28:55 +0000 (0:00:00.286) 0:00:00.392 ********** 2025-04-13 00:28:59.114270 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:28:59.115047 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:28:59.115208 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:28:59.115244 | orchestrator | ok: [testbed-manager] 2025-04-13 00:28:59.115313 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:28:59.116201 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:28:59.116856 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:28:59.117407 | orchestrator | 2025-04-13 00:28:59.117948 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-04-13 00:28:59.118343 | orchestrator | 2025-04-13 00:28:59.119506 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-13 00:28:59.119985 | orchestrator | Sunday 13 April 2025 00:28:59 +0000 (0:00:03.578) 0:00:03.971 ********** 2025-04-13 00:28:59.251284 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-04-13 00:28:59.253118 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-04-13 00:28:59.575032 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-04-13 00:28:59.575164 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-04-13 00:28:59.575342 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-13 00:28:59.576717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-13 00:28:59.577433 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-13 00:28:59.578341 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-04-13 00:28:59.579375 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-04-13 00:28:59.579838 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-04-13 00:28:59.580891 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-13 00:28:59.581345 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-13 00:28:59.582286 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-04-13 00:28:59.583815 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-13 00:28:59.584216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-13 00:28:59.584787 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-13 00:28:59.585711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-13 00:28:59.586512 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-1)  2025-04-13 00:28:59.587593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-13 00:28:59.589009 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-04-13 00:28:59.589587 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-13 00:28:59.590939 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-13 00:28:59.591487 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-13 00:28:59.592556 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:28:59.593335 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-13 00:28:59.594949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-13 00:28:59.595637 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-04-13 00:28:59.596735 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-04-13 00:28:59.597448 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-13 00:28:59.598724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-13 00:28:59.599178 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:28:59.604274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-13 00:28:59.604587 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-04-13 00:28:59.606511 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:28:59.606893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-13 00:28:59.609167 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-13 00:28:59.610426 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-13 00:28:59.613499 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-04-13 00:28:59.613823 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-1)  2025-04-13 00:28:59.614883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-13 00:28:59.616139 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-13 00:28:59.616646 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:28:59.617097 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-13 00:28:59.618117 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-13 00:28:59.619960 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-13 00:28:59.623144 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:28:59.623177 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-13 00:28:59.623214 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-13 00:28:59.623656 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-13 00:28:59.624217 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-13 00:28:59.626811 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-13 00:28:59.626945 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-13 00:28:59.627578 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-13 00:28:59.627962 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:28:59.630589 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-13 00:28:59.631147 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:28:59.631185 | orchestrator | 2025-04-13 00:28:59.631282 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-04-13 00:28:59.631849 | orchestrator | 2025-04-13 00:28:59.634621 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-04-13 00:28:59.634994 | orchestrator | Sunday 13 April 2025 00:28:59 +0000 (0:00:00.461) 
0:00:04.433 ********** 2025-04-13 00:28:59.675403 | orchestrator | ok: [testbed-manager] 2025-04-13 00:28:59.700927 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:28:59.725514 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:28:59.776282 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:28:59.777326 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:28:59.777387 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:28:59.777658 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:28:59.778483 | orchestrator | 2025-04-13 00:28:59.778766 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-04-13 00:28:59.778798 | orchestrator | Sunday 13 April 2025 00:28:59 +0000 (0:00:00.201) 0:00:04.634 ********** 2025-04-13 00:29:01.003902 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:29:01.004951 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:29:01.005013 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:29:01.007323 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:29:01.007674 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:29:01.007753 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:29:01.007781 | orchestrator | ok: [testbed-manager] 2025-04-13 00:29:01.007814 | orchestrator | 2025-04-13 00:29:01.009723 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-04-13 00:29:01.010432 | orchestrator | Sunday 13 April 2025 00:29:00 +0000 (0:00:01.226) 0:00:05.860 ********** 2025-04-13 00:29:02.379023 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:29:02.379683 | orchestrator | ok: [testbed-manager] 2025-04-13 00:29:02.380604 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:29:02.381440 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:29:02.382838 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:29:02.383555 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:29:02.383588 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:29:02.384191 | 
orchestrator | 2025-04-13 00:29:02.385021 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-04-13 00:29:02.385646 | orchestrator | Sunday 13 April 2025 00:29:02 +0000 (0:00:01.372) 0:00:07.233 ********** 2025-04-13 00:29:02.656725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:29:02.657409 | orchestrator | 2025-04-13 00:29:02.658478 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-04-13 00:29:02.659541 | orchestrator | Sunday 13 April 2025 00:29:02 +0000 (0:00:00.279) 0:00:07.513 ********** 2025-04-13 00:29:04.852820 | orchestrator | changed: [testbed-manager] 2025-04-13 00:29:04.853695 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:29:04.853736 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:29:04.855004 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:29:04.855349 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:29:04.856297 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:29:04.857014 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:29:04.859202 | orchestrator | 2025-04-13 00:29:04.860854 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-04-13 00:29:04.861652 | orchestrator | Sunday 13 April 2025 00:29:04 +0000 (0:00:02.194) 0:00:09.708 ********** 2025-04-13 00:29:04.929468 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:29:05.134299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:29:05.134470 | orchestrator | 2025-04-13 00:29:05.136233 | 
orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-04-13 00:29:05.136544 | orchestrator | Sunday 13 April 2025 00:29:05 +0000 (0:00:00.281) 0:00:09.990 ********** 2025-04-13 00:29:06.130897 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:29:06.133209 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:29:06.134409 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:29:06.135361 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:29:06.136892 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:29:06.136939 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:29:06.137314 | orchestrator | 2025-04-13 00:29:06.138038 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-04-13 00:29:06.138790 | orchestrator | Sunday 13 April 2025 00:29:06 +0000 (0:00:00.997) 0:00:10.987 ********** 2025-04-13 00:29:06.187390 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:29:06.743836 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:29:06.743988 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:29:06.745020 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:29:06.746214 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:29:06.746910 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:29:06.747584 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:29:06.748703 | orchestrator | 2025-04-13 00:29:06.749237 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-04-13 00:29:06.750372 | orchestrator | Sunday 13 April 2025 00:29:06 +0000 (0:00:00.611) 0:00:11.599 ********** 2025-04-13 00:29:06.850007 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:29:06.877184 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:29:06.902971 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:29:07.244083 | orchestrator | skipping: [testbed-node-0] 2025-04-13 
00:29:07.246294 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:29:07.248087 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:29:07.248210 | orchestrator | ok: [testbed-manager] 2025-04-13 00:29:07.249642 | orchestrator | 2025-04-13 00:29:07.250951 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-04-13 00:29:07.336235 | orchestrator | Sunday 13 April 2025 00:29:07 +0000 (0:00:00.499) 0:00:12.098 ********** 2025-04-13 00:29:07.336365 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:29:07.357315 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:29:07.386275 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:29:07.411935 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:29:07.482423 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:29:07.483547 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:29:07.484106 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:29:07.485077 | orchestrator | 2025-04-13 00:29:07.485479 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-04-13 00:29:07.485969 | orchestrator | Sunday 13 April 2025 00:29:07 +0000 (0:00:00.241) 0:00:12.340 ********** 2025-04-13 00:29:07.771963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:29:07.773852 | orchestrator | 2025-04-13 00:29:07.773891 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-04-13 00:29:07.773926 | orchestrator | Sunday 13 April 2025 00:29:07 +0000 (0:00:00.286) 0:00:12.626 ********** 2025-04-13 00:29:08.074432 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:29:08.074977 | orchestrator | 2025-04-13 00:29:08.078572 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-04-13 00:29:08.079037 | orchestrator | Sunday 13 April 2025 00:29:08 +0000 (0:00:00.304) 0:00:12.930 ********** 2025-04-13 00:29:09.260797 | orchestrator | ok: [testbed-manager] 2025-04-13 00:29:09.261194 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:29:09.261223 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:29:09.262167 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:29:09.262825 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:29:09.262848 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:29:09.263630 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:29:09.265598 | orchestrator | 2025-04-13 00:29:09.266713 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-04-13 00:29:09.267331 | orchestrator | Sunday 13 April 2025 00:29:09 +0000 (0:00:01.186) 0:00:14.116 ********** 2025-04-13 00:29:09.333464 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:29:09.360669 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:29:09.393207 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:29:09.415383 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:29:09.473147 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:29:09.473724 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:29:09.474355 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:29:09.474897 | orchestrator | 2025-04-13 00:29:09.475469 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-04-13 00:29:09.475982 | orchestrator | Sunday 13 April 2025 
00:29:09 +0000 (0:00:00.213) 0:00:14.330 **********
2025-04-13 00:29:10.006102 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:10.007828 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:10.011215 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:10.012357 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:10.012392 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:10.012413 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:10.013277 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:10.014267 | orchestrator |
2025-04-13 00:29:10.014622 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-04-13 00:29:10.015350 | orchestrator | Sunday 13 April 2025 00:29:10 +0000 (0:00:00.533) 0:00:14.863 **********
2025-04-13 00:29:10.089112 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:29:10.114386 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:29:10.139137 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:29:10.169143 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:29:10.234108 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:29:10.234665 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:29:10.235622 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:29:10.236694 | orchestrator |
2025-04-13 00:29:10.236985 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-04-13 00:29:10.238131 | orchestrator | Sunday 13 April 2025 00:29:10 +0000 (0:00:00.228) 0:00:15.092 **********
2025-04-13 00:29:10.812617 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:10.812795 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:29:10.813127 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:29:10.814070 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:29:10.817373 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:29:10.817470 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:29:10.817491 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:29:10.817510 | orchestrator |
2025-04-13 00:29:10.817750 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-04-13 00:29:10.818634 | orchestrator | Sunday 13 April 2025 00:29:10 +0000 (0:00:00.577) 0:00:15.669 **********
2025-04-13 00:29:11.875472 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:11.876291 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:29:11.876336 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:29:11.876830 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:29:11.877107 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:29:11.877733 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:29:11.878243 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:29:11.878814 | orchestrator |
2025-04-13 00:29:11.879379 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-04-13 00:29:11.880026 | orchestrator | Sunday 13 April 2025 00:29:11 +0000 (0:00:01.062) 0:00:16.731 **********
2025-04-13 00:29:12.979020 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:12.979235 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:12.980128 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:12.982240 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:12.982894 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:12.982929 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:12.983693 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:12.984252 | orchestrator |
2025-04-13 00:29:12.984887 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-04-13 00:29:12.985611 | orchestrator | Sunday 13 April 2025 00:29:12 +0000 (0:00:01.103) 0:00:17.835 **********
2025-04-13 00:29:13.296380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:29:13.296665 | orchestrator |
2025-04-13 00:29:13.296722 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-04-13 00:29:13.299559 | orchestrator | Sunday 13 April 2025 00:29:13 +0000 (0:00:00.316) 0:00:18.151 **********
2025-04-13 00:29:13.370962 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:29:14.762197 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:29:14.762730 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:29:14.762986 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:29:14.764550 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:29:14.765063 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:29:14.766077 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:29:14.768717 | orchestrator |
2025-04-13 00:29:14.769186 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-04-13 00:29:14.770874 | orchestrator | Sunday 13 April 2025 00:29:14 +0000 (0:00:01.466) 0:00:19.618 **********
2025-04-13 00:29:14.871890 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:14.900886 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:14.935346 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:14.956325 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:15.009841 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:15.009999 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:15.010669 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:15.012807 | orchestrator |
2025-04-13 00:29:15.013367 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-04-13 00:29:15.014545 | orchestrator | Sunday 13 April 2025 00:29:15 +0000 (0:00:00.249) 0:00:19.867 **********
2025-04-13 00:29:15.082794 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:15.123651 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:15.151000 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:15.219015 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:15.220188 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:15.221559 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:15.223069 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:15.224214 | orchestrator |
2025-04-13 00:29:15.225333 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-04-13 00:29:15.226335 | orchestrator | Sunday 13 April 2025 00:29:15 +0000 (0:00:00.209) 0:00:20.076 **********
2025-04-13 00:29:15.289829 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:15.316483 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:15.354308 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:15.376641 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:15.453582 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:15.454195 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:15.454810 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:15.455332 | orchestrator |
2025-04-13 00:29:15.455927 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-04-13 00:29:15.456373 | orchestrator | Sunday 13 April 2025 00:29:15 +0000 (0:00:00.234) 0:00:20.311 **********
2025-04-13 00:29:15.757238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:29:15.757633 | orchestrator |
2025-04-13 00:29:15.758599 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-04-13 00:29:15.759380 | orchestrator | Sunday 13 April 2025 00:29:15 +0000 (0:00:00.303) 0:00:20.614 **********
2025-04-13 00:29:16.314185 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:16.314826 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:16.315279 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:16.315879 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:16.316850 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:16.317108 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:16.318399 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:16.318786 | orchestrator |
2025-04-13 00:29:16.319570 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-04-13 00:29:16.320244 | orchestrator | Sunday 13 April 2025 00:29:16 +0000 (0:00:00.555) 0:00:21.170 **********
2025-04-13 00:29:16.387779 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:29:16.412881 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:29:16.436130 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:29:16.459461 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:29:16.519975 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:29:16.520696 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:29:16.522262 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:29:16.523103 | orchestrator |
2025-04-13 00:29:16.524110 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-04-13 00:29:16.526146 | orchestrator | Sunday 13 April 2025 00:29:16 +0000 (0:00:00.207) 0:00:21.377 **********
2025-04-13 00:29:17.581595 | orchestrator | changed: [testbed-manager]
2025-04-13 00:29:17.582734 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:17.582803 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:17.582846 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:17.583841 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:29:17.583875 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:29:17.583895 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:29:18.175752 | orchestrator |
2025-04-13 00:29:18.175874 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-04-13 00:29:18.175893 | orchestrator | Sunday 13 April 2025 00:29:17 +0000 (0:00:01.060) 0:00:22.437 **********
2025-04-13 00:29:18.175925 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:18.176447 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:18.177137 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:18.177692 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:18.178596 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:18.180822 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:18.181081 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:18.181823 | orchestrator |
2025-04-13 00:29:18.182342 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-04-13 00:29:18.182636 | orchestrator | Sunday 13 April 2025 00:29:18 +0000 (0:00:00.593) 0:00:23.030 **********
2025-04-13 00:29:19.454952 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:19.456063 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:19.457028 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:19.458560 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:19.460277 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:29:19.461075 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:29:19.462258 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:29:19.462777 | orchestrator |
2025-04-13 00:29:19.463623 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-04-13 00:29:19.464833 | orchestrator | Sunday 13 April 2025 00:29:19 +0000 (0:00:01.278) 0:00:24.309 **********
2025-04-13 00:29:32.088924 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:32.089115 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:32.089140 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:32.089156 | orchestrator | changed: [testbed-manager]
2025-04-13 00:29:32.089172 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:29:32.089193 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:29:32.090285 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:29:32.091002 | orchestrator |
2025-04-13 00:29:32.091749 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-04-13 00:29:32.093634 | orchestrator | Sunday 13 April 2025 00:29:32 +0000 (0:00:12.631) 0:00:36.940 **********
2025-04-13 00:29:32.163785 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:32.197018 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:32.230241 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:32.259066 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:32.328029 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:32.328159 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:32.330136 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:32.330234 | orchestrator |
2025-04-13 00:29:32.330447 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-04-13 00:29:32.330478 | orchestrator | Sunday 13 April 2025 00:29:32 +0000 (0:00:00.245) 0:00:37.186 **********
2025-04-13 00:29:32.430957 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:32.463433 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:32.490899 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:32.594975 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:32.595381 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:32.595933 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:32.596552 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:32.597054 | orchestrator |
2025-04-13 00:29:32.597619 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-04-13 00:29:32.598458 | orchestrator | Sunday 13 April 2025 00:29:32 +0000 (0:00:00.266) 0:00:37.453 **********
2025-04-13 00:29:32.691689 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:32.712638 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:32.740448 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:32.785690 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:32.848474 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:32.848917 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:32.849826 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:32.850376 | orchestrator |
2025-04-13 00:29:32.851378 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-04-13 00:29:32.851844 | orchestrator | Sunday 13 April 2025 00:29:32 +0000 (0:00:00.252) 0:00:37.705 **********
2025-04-13 00:29:33.153648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:29:33.153827 | orchestrator |
2025-04-13 00:29:33.154718 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-04-13 00:29:33.154790 | orchestrator | Sunday 13 April 2025 00:29:33 +0000 (0:00:00.304) 0:00:38.010 **********
2025-04-13 00:29:34.703871 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:34.704786 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:34.704832 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:34.705563 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:34.706429 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:34.707619 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:34.707993 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:34.709020 | orchestrator |
2025-04-13 00:29:34.709999 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-04-13 00:29:34.710713 | orchestrator | Sunday 13 April 2025 00:29:34 +0000 (0:00:01.547) 0:00:39.557 **********
2025-04-13 00:29:35.755170 | orchestrator | changed: [testbed-manager]
2025-04-13 00:29:35.755938 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:29:35.756629 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:29:35.758473 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:29:35.759283 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:29:35.760983 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:29:36.628580 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:29:36.628677 | orchestrator |
2025-04-13 00:29:36.628690 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-04-13 00:29:36.628700 | orchestrator | Sunday 13 April 2025 00:29:35 +0000 (0:00:01.053) 0:00:40.611 **********
2025-04-13 00:29:36.628721 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:36.628994 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:36.630611 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:36.631046 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:36.631197 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:36.631659 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:36.633436 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:36.634857 | orchestrator |
2025-04-13 00:29:36.635303 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-04-13 00:29:36.636283 | orchestrator | Sunday 13 April 2025 00:29:36 +0000 (0:00:00.307) 0:00:41.484 **********
2025-04-13 00:29:36.934820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:29:36.935945 | orchestrator |
2025-04-13 00:29:36.936900 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-04-13 00:29:36.938470 | orchestrator | Sunday 13 April 2025 00:29:36 +0000 (0:00:00.307) 0:00:41.791 **********
2025-04-13 00:29:37.963863 | orchestrator | changed: [testbed-manager]
2025-04-13 00:29:37.964587 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:29:37.966615 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:29:37.967083 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:29:37.968247 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:29:37.969610 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:29:37.970361 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:29:37.971330 | orchestrator |
2025-04-13 00:29:37.972832 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-04-13 00:29:38.089231 | orchestrator | Sunday 13 April 2025 00:29:37 +0000 (0:00:01.026) 0:00:42.818 **********
2025-04-13 00:29:38.089363 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:29:38.116930 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:29:38.142815 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:29:38.294188 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:29:38.296058 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:29:38.297165 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:29:38.298098 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:29:38.299278 | orchestrator |
2025-04-13 00:29:38.300881 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-04-13 00:29:38.301868 | orchestrator | Sunday 13 April 2025 00:29:38 +0000 (0:00:00.332) 0:00:43.150 **********
2025-04-13 00:29:50.584015 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:29:50.584340 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:29:50.584425 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:29:50.584448 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:29:50.584472 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:29:50.584574 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:29:50.585470 | orchestrator | changed: [testbed-manager]
2025-04-13 00:29:50.586394 | orchestrator |
2025-04-13 00:29:50.586657 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-04-13 00:29:50.587489 | orchestrator | Sunday 13 April 2025 00:29:50 +0000 (0:00:12.287) 0:00:55.437 **********
2025-04-13 00:29:51.667960 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:51.668418 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:51.668461 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:51.668830 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:51.669652 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:51.670360 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:51.670499 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:51.671590 | orchestrator |
2025-04-13 00:29:51.674724 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-04-13 00:29:51.675117 | orchestrator | Sunday 13 April 2025 00:29:51 +0000 (0:00:01.087) 0:00:56.525 **********
2025-04-13 00:29:52.569579 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:52.570490 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:52.570732 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:52.570829 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:52.571316 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:52.571703 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:52.574223 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:52.574713 | orchestrator |
2025-04-13 00:29:52.574753 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
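[Editor's note: the "Copy ubuntu.sources file" task earlier in this play replaces the legacy /etc/apt/sources.list with a Deb822-style sources file, which is the default format on Ubuntu 24.04. The actual file shipped by the osism.commons.repository role is not shown in this log; the fragment below is an illustrative sketch of the Deb822 format only, with assumed stock-Ubuntu mirror URIs and suites.]

```
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```

Removing the old sources.list and writing a single ubuntu.sources file avoids apt warning about duplicate repository definitions.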
2025-04-13 00:29:52.574782 | orchestrator | Sunday 13 April 2025 00:29:52 +0000 (0:00:00.900) 0:00:57.425 **********
2025-04-13 00:29:52.649292 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:52.679386 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:52.710701 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:52.743322 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:52.801231 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:52.802789 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:52.803955 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:52.804729 | orchestrator |
2025-04-13 00:29:52.805786 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-04-13 00:29:52.808386 | orchestrator | Sunday 13 April 2025 00:29:52 +0000 (0:00:00.233) 0:00:57.659 **********
2025-04-13 00:29:52.911979 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:52.937556 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:52.966987 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:53.028290 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:53.028884 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:53.030090 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:53.031005 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:53.031639 | orchestrator |
2025-04-13 00:29:53.033325 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-04-13 00:29:53.034455 | orchestrator | Sunday 13 April 2025 00:29:53 +0000 (0:00:00.226) 0:00:57.885 **********
2025-04-13 00:29:53.347089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:29:53.347758 | orchestrator |
2025-04-13 00:29:53.348254 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-04-13 00:29:53.349198 | orchestrator | Sunday 13 April 2025 00:29:53 +0000 (0:00:00.318) 0:00:58.203 **********
2025-04-13 00:29:54.873437 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:54.879444 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:54.880373 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:54.881683 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:54.882639 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:54.883339 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:54.883970 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:54.884598 | orchestrator |
2025-04-13 00:29:54.885184 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-04-13 00:29:54.885946 | orchestrator | Sunday 13 April 2025 00:29:54 +0000 (0:00:01.524) 0:00:59.728 **********
2025-04-13 00:29:55.447216 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:29:55.447423 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:29:55.447448 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:29:55.447464 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:29:55.447478 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:29:55.447492 | orchestrator | changed: [testbed-manager]
2025-04-13 00:29:55.447625 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:29:55.447654 | orchestrator |
2025-04-13 00:29:55.447723 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-04-13 00:29:55.447891 | orchestrator | Sunday 13 April 2025 00:29:55 +0000 (0:00:00.574) 0:01:00.303 **********
2025-04-13 00:29:55.529332 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:55.553810 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:55.588328 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:55.612633 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:55.691979 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:55.692749 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:55.693241 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:55.697312 | orchestrator |
2025-04-13 00:29:55.698321 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-04-13 00:29:55.698884 | orchestrator | Sunday 13 April 2025 00:29:55 +0000 (0:00:00.245) 0:01:00.549 **********
2025-04-13 00:29:56.722127 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:29:56.722676 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:29:56.722730 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:29:56.723328 | orchestrator | ok: [testbed-manager]
2025-04-13 00:29:56.724051 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:29:56.724501 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:29:56.724969 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:29:56.725498 | orchestrator |
2025-04-13 00:29:56.727582 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-04-13 00:29:56.727701 | orchestrator | Sunday 13 April 2025 00:29:56 +0000 (0:00:01.029) 0:01:01.578 **********
2025-04-13 00:29:58.195996 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:29:58.196451 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:29:58.198456 | orchestrator | changed: [testbed-manager]
2025-04-13 00:29:58.200573 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:29:58.201865 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:29:58.202469 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:29:58.203008 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:29:58.203808 | orchestrator |
2025-04-13 00:29:58.204554 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-04-13 00:29:58.205687 | orchestrator | Sunday 13 April 2025 00:29:58 +0000 (0:00:01.471) 0:01:03.050 **********
2025-04-13 00:30:00.329494 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:30:00.331029 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:30:00.331313 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:30:00.332400 | orchestrator | ok: [testbed-manager]
2025-04-13 00:30:00.334452 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:30:00.335659 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:30:00.336420 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:30:00.336901 | orchestrator |
2025-04-13 00:30:00.337787 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-04-13 00:30:00.338200 | orchestrator | Sunday 13 April 2025 00:30:00 +0000 (0:00:02.135) 0:01:05.185 **********
2025-04-13 00:30:37.263057 | orchestrator | ok: [testbed-manager]
2025-04-13 00:30:37.263629 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:30:37.263671 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:30:37.263695 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:30:37.263930 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:30:37.265495 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:30:37.266205 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:30:37.267164 | orchestrator |
2025-04-13 00:30:37.268038 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-04-13 00:30:37.268711 | orchestrator | Sunday 13 April 2025 00:30:37 +0000 (0:00:36.928) 0:01:42.113 **********
2025-04-13 00:31:58.388078 | orchestrator | changed: [testbed-manager]
2025-04-13 00:31:58.388695 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:31:58.388731 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:31:58.388775 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:31:58.389581 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:31:58.390660 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:31:58.392046 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:31:58.392672 | orchestrator |
2025-04-13 00:31:58.393657 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-04-13 00:31:58.394103 | orchestrator | Sunday 13 April 2025 00:31:58 +0000 (0:01:21.126) 0:03:03.240 **********
2025-04-13 00:31:59.940616 | orchestrator | ok: [testbed-manager]
2025-04-13 00:31:59.941261 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:31:59.942392 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:31:59.943227 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:31:59.944069 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:31:59.944532 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:31:59.945313 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:31:59.946077 | orchestrator |
2025-04-13 00:31:59.946639 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-04-13 00:31:59.947858 | orchestrator | Sunday 13 April 2025 00:31:59 +0000 (0:00:01.556) 0:03:04.796 **********
2025-04-13 00:32:12.168826 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:32:12.170630 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:32:12.170737 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:32:12.170770 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:32:12.170800 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:32:12.171271 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:32:12.171576 | orchestrator | changed: [testbed-manager]
2025-04-13 00:32:12.172562 | orchestrator |
2025-04-13 00:32:12.173340 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-04-13 00:32:12.173867 | orchestrator | Sunday 13 April 2025 00:32:12 +0000 (0:00:12.222) 0:03:17.019 **********
2025-04-13 00:32:12.532628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-04-13 00:32:12.533326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-04-13 00:32:12.535075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-04-13 00:32:12.536323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-04-13 00:32:12.537789 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-04-13 00:32:12.538137 | orchestrator |
2025-04-13 00:32:12.539015 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-04-13 00:32:12.541376 | orchestrator | Sunday 13 April 2025 00:32:12 +0000 (0:00:00.367) 0:03:17.387 **********
2025-04-13 00:32:12.591457 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-13 00:32:12.624814 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:32:12.625541 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-13 00:32:12.665658 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:32:12.699486 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-13 00:32:12.699619 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:32:12.735398 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-13 00:32:12.735560 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:32:14.259038 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-13 00:32:14.259422 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-13 00:32:14.260561 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-13 00:32:14.261884 | orchestrator |
2025-04-13 00:32:14.262424 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-04-13 00:32:14.263416 | orchestrator | Sunday 13 April 2025 00:32:14 +0000 (0:00:01.726) 0:03:19.114 **********
2025-04-13 00:32:14.331085 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-04-13 00:32:14.331287 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-04-13 00:32:14.331687 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-04-13 00:32:14.332186 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-04-13 00:32:14.332423 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-04-13 00:32:14.332932 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-04-13 00:32:14.333770 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-04-13 00:32:14.395250 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-04-13 00:32:14.395488 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-04-13 00:32:14.396367 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-04-13 00:32:14.396445 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-04-13 00:32:14.396677 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-04-13 00:32:14.397163 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-04-13 00:32:14.397699 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-04-13 00:32:14.398238 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-04-13 00:32:14.398624 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-04-13 00:32:14.399005 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-04-13 00:32:14.399536 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-04-13 00:32:14.400121 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-04-13 00:32:14.400639 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-04-13 00:32:14.401267 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-04-13 00:32:14.401573 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-04-13 00:32:14.402051 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-04-13 00:32:14.402584 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-04-13 00:32:14.403270 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-04-13 00:32:14.403574 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-04-13 00:32:14.403593 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-04-13 00:32:14.403908 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-04-13 00:32:14.404589 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-04-13 00:32:14.452548 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:32:14.452673 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-04-13 00:32:14.452982 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-04-13 00:32:14.453732 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-04-13 00:32:14.454070 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-04-13 00:32:14.454454 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-04-13 00:32:14.454896 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-04-13 00:32:14.455272 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-04-13 00:32:14.497783 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:32:14.497861 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-04-13 00:32:14.498007 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-04-13 00:32:14.498439 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-04-13 00:32:14.498849 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-04-13 00:32:14.522385 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:32:17.976880 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:32:17.977063 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-04-13 00:32:17.977095 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-04-13 00:32:17.978395 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-04-13 00:32:17.980200 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-04-13 00:32:17.981309 | orchestrator | changed: [testbed-node-0] => (item={'name':
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-13 00:32:17.982774 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-13 00:32:17.984219 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-13 00:32:17.985005 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-13 00:32:17.985826 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-13 00:32:17.986621 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-13 00:32:17.987597 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-13 00:32:17.988020 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-13 00:32:17.988741 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-13 00:32:17.989306 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-13 00:32:17.991091 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-13 00:32:17.992241 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-13 00:32:17.993465 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-13 00:32:17.994571 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-13 00:32:17.995018 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-13 00:32:17.995897 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2025-04-13 00:32:17.996566 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-13 00:32:17.997301 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-13 00:32:17.998891 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-13 00:32:17.999893 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-13 00:32:18.000755 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-13 00:32:18.001731 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-13 00:32:18.002123 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-13 00:32:18.002966 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-13 00:32:18.003941 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-13 00:32:18.004657 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-13 00:32:18.005554 | orchestrator | 2025-04-13 00:32:18.006319 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-04-13 00:32:18.008175 | orchestrator | Sunday 13 April 2025 00:32:17 +0000 (0:00:03.718) 0:03:22.833 ********** 2025-04-13 00:32:18.523691 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-13 00:32:18.524160 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-13 00:32:18.524326 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-13 00:32:18.524973 | orchestrator | changed: 
[testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-13 00:32:18.525552 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-13 00:32:18.526152 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-13 00:32:18.526647 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-13 00:32:18.526932 | orchestrator | 2025-04-13 00:32:18.527881 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-04-13 00:32:18.528202 | orchestrator | Sunday 13 April 2025 00:32:18 +0000 (0:00:00.548) 0:03:23.381 ********** 2025-04-13 00:32:18.581472 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-13 00:32:18.606302 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:32:18.669002 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-13 00:32:18.697879 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:32:18.698961 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-13 00:32:19.005191 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:32:19.006723 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-13 00:32:19.008467 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:32:19.008761 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-13 00:32:19.011182 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-13 00:32:19.013503 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-13 
00:32:19.013559 | orchestrator | 2025-04-13 00:32:19.013573 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-04-13 00:32:19.014720 | orchestrator | Sunday 13 April 2025 00:32:18 +0000 (0:00:00.480) 0:03:23.862 ********** 2025-04-13 00:32:19.063189 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-13 00:32:19.086321 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:32:19.186553 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-13 00:32:19.186661 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-13 00:32:19.599336 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:32:19.600804 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:32:19.602376 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-13 00:32:19.602425 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:32:19.603828 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-13 00:32:19.605130 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-13 00:32:19.605786 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-13 00:32:19.606698 | orchestrator | 2025-04-13 00:32:19.607627 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-04-13 00:32:19.608325 | orchestrator | Sunday 13 April 2025 00:32:19 +0000 (0:00:00.594) 0:03:24.456 ********** 2025-04-13 00:32:19.683390 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:32:19.715316 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:32:19.741196 | orchestrator 
| skipping: [testbed-node-4] 2025-04-13 00:32:19.764477 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:32:19.922342 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:32:19.922572 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:32:19.923375 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:32:19.924420 | orchestrator | 2025-04-13 00:32:19.924572 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-04-13 00:32:25.676836 | orchestrator | Sunday 13 April 2025 00:32:19 +0000 (0:00:00.318) 0:03:24.775 ********** 2025-04-13 00:32:25.676984 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:32:25.677237 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:32:25.678509 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:32:25.679293 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:32:25.679777 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:32:25.680275 | orchestrator | ok: [testbed-manager] 2025-04-13 00:32:25.680910 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:32:25.681563 | orchestrator | 2025-04-13 00:32:25.682289 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-04-13 00:32:25.683087 | orchestrator | Sunday 13 April 2025 00:32:25 +0000 (0:00:05.757) 0:03:30.533 ********** 2025-04-13 00:32:25.759619 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-04-13 00:32:25.760012 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-04-13 00:32:25.796157 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:32:25.844018 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:32:25.844181 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-04-13 00:32:25.845355 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-04-13 00:32:25.876305 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:32:25.917665 | orchestrator | skipping: [testbed-node-5] 2025-04-13 
00:32:25.999202 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-04-13 00:32:25.999305 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-04-13 00:32:25.999334 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:32:25.999843 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:32:26.000432 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-04-13 00:32:26.000886 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:32:26.002291 | orchestrator | 2025-04-13 00:32:26.002597 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-04-13 00:32:26.002625 | orchestrator | Sunday 13 April 2025 00:32:25 +0000 (0:00:00.324) 0:03:30.857 ********** 2025-04-13 00:32:27.035799 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-04-13 00:32:27.036866 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-04-13 00:32:27.038713 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-04-13 00:32:27.039506 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-04-13 00:32:27.039698 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-04-13 00:32:27.040325 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-04-13 00:32:27.040900 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-04-13 00:32:27.041936 | orchestrator | 2025-04-13 00:32:27.042870 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-04-13 00:32:27.042910 | orchestrator | Sunday 13 April 2025 00:32:27 +0000 (0:00:01.029) 0:03:31.887 ********** 2025-04-13 00:32:27.541423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:32:27.542103 | orchestrator | 2025-04-13 00:32:27.542143 | orchestrator | TASK [osism.commons.motd : Remove 
update-motd package] ************************* 2025-04-13 00:32:27.542168 | orchestrator | Sunday 13 April 2025 00:32:27 +0000 (0:00:00.509) 0:03:32.396 ********** 2025-04-13 00:32:28.695732 | orchestrator | ok: [testbed-manager] 2025-04-13 00:32:28.695922 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:32:28.696561 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:32:28.697772 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:32:28.699381 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:32:28.700113 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:32:28.700815 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:32:28.701554 | orchestrator | 2025-04-13 00:32:28.702228 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-04-13 00:32:28.702878 | orchestrator | Sunday 13 April 2025 00:32:28 +0000 (0:00:01.155) 0:03:33.552 ********** 2025-04-13 00:32:29.300004 | orchestrator | ok: [testbed-manager] 2025-04-13 00:32:29.300892 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:32:29.300959 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:32:29.301214 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:32:29.301902 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:32:29.302424 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:32:29.302945 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:32:29.303426 | orchestrator | 2025-04-13 00:32:29.304194 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-04-13 00:32:29.304571 | orchestrator | Sunday 13 April 2025 00:32:29 +0000 (0:00:00.604) 0:03:34.156 ********** 2025-04-13 00:32:29.918963 | orchestrator | changed: [testbed-manager] 2025-04-13 00:32:29.919310 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:32:29.920787 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:32:29.921478 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:32:29.922570 | orchestrator | changed: [testbed-node-0] 
2025-04-13 00:32:29.923610 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:32:29.924178 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:32:29.924879 | orchestrator | 2025-04-13 00:32:29.925767 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-04-13 00:32:29.926100 | orchestrator | Sunday 13 April 2025 00:32:29 +0000 (0:00:00.617) 0:03:34.774 ********** 2025-04-13 00:32:30.455697 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:32:30.456339 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:32:30.456464 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:32:30.460149 | orchestrator | ok: [testbed-manager] 2025-04-13 00:32:30.460564 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:32:30.461582 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:32:30.463443 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:32:30.463510 | orchestrator | 2025-04-13 00:32:30.464090 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-04-13 00:32:30.464958 | orchestrator | Sunday 13 April 2025 00:32:30 +0000 (0:00:00.538) 0:03:35.312 ********** 2025-04-13 00:32:31.410405 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744502663.0881143, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.410780 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744502673.6925988, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.410813 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744502688.0617776, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.410827 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744502669.0172486, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.410860 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744502683.1425335, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.410883 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744502670.3255296, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.411684 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744502672.357525, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.413445 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744502686.5745437, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.414368 | orchestrator | changed: [testbed-node-3] => 
(item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744502603.9857607, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.414399 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744502618.405698, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.414420 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744502598.3451428, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.414763 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 
2049, 'nlink': 1, 'atime': 1744502600.5514946, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.415297 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744502611.433524, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.416314 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744502601.7944763, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 00:32:31.417043 | orchestrator | 2025-04-13 00:32:31.417459 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-04-13 00:32:31.417748 | orchestrator | Sunday 13 April 2025 00:32:31 +0000 (0:00:00.949) 0:03:36.262 ********** 2025-04-13 00:32:32.494298 | orchestrator | changed: [testbed-manager] 2025-04-13 00:32:32.495837 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:32:32.496552 | orchestrator | changed: [testbed-node-5] 2025-04-13 
00:32:32.498303 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:32:32.498602 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:32:32.499717 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:32:32.500370 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:32:32.501160 | orchestrator | 2025-04-13 00:32:32.501609 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-04-13 00:32:32.502915 | orchestrator | Sunday 13 April 2025 00:32:32 +0000 (0:00:01.087) 0:03:37.349 ********** 2025-04-13 00:32:33.668120 | orchestrator | changed: [testbed-manager] 2025-04-13 00:32:33.668287 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:32:33.668318 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:32:33.669047 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:32:33.669296 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:32:33.669789 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:32:33.670664 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:32:33.671019 | orchestrator | 2025-04-13 00:32:33.671223 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-04-13 00:32:33.672005 | orchestrator | Sunday 13 April 2025 00:32:33 +0000 (0:00:01.173) 0:03:38.523 ********** 2025-04-13 00:32:33.738568 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:32:33.772675 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:32:33.846143 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:32:33.884506 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:32:33.949660 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:32:33.950966 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:32:33.954320 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:32:33.954610 | orchestrator | 2025-04-13 00:32:33.954644 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 
2025-04-13 00:32:33.954666 | orchestrator | Sunday 13 April 2025 00:32:33 +0000 (0:00:00.283) 0:03:38.807 **********
2025-04-13 00:32:34.773153 | orchestrator | ok: [testbed-manager]
2025-04-13 00:32:34.774307 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:32:34.774454 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:32:34.775939 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:32:34.776284 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:32:34.777668 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:32:34.779503 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:32:34.779947 | orchestrator |
2025-04-13 00:32:34.781509 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-04-13 00:32:34.783907 | orchestrator | Sunday 13 April 2025 00:32:34 +0000 (0:00:00.822) 0:03:39.630 **********
2025-04-13 00:32:35.191253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:32:35.191813 | orchestrator |
2025-04-13 00:32:35.193095 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-04-13 00:32:35.193661 | orchestrator | Sunday 13 April 2025 00:32:35 +0000 (0:00:00.416) 0:03:40.046 **********
2025-04-13 00:32:42.580612 | orchestrator | ok: [testbed-manager]
2025-04-13 00:32:42.581527 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:32:42.581886 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:32:42.583088 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:32:42.583914 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:32:42.585902 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:32:42.586369 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:32:42.587206 | orchestrator |
2025-04-13 00:32:42.587770 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-04-13 00:32:42.588325 | orchestrator | Sunday 13 April 2025 00:32:42 +0000 (0:00:07.389) 0:03:47.436 **********
2025-04-13 00:32:43.829999 | orchestrator | ok: [testbed-manager]
2025-04-13 00:32:43.830361 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:32:43.831289 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:32:43.832125 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:32:43.835354 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:32:43.836113 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:32:43.836332 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:32:43.836923 | orchestrator |
2025-04-13 00:32:43.837231 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-04-13 00:32:43.838266 | orchestrator | Sunday 13 April 2025 00:32:43 +0000 (0:00:01.247) 0:03:48.683 **********
2025-04-13 00:32:44.802529 | orchestrator | ok: [testbed-manager]
2025-04-13 00:32:44.802808 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:32:44.803446 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:32:44.804048 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:32:44.808059 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:32:44.808671 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:32:44.808696 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:32:44.808710 | orchestrator |
2025-04-13 00:32:44.808726 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-04-13 00:32:44.808747 | orchestrator | Sunday 13 April 2025 00:32:44 +0000 (0:00:00.975) 0:03:49.658 **********
2025-04-13 00:32:45.200118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:32:45.202624 | orchestrator |
2025-04-13 00:32:53.302357 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-04-13 00:32:53.302524 | orchestrator | Sunday 13 April 2025 00:32:45 +0000 (0:00:00.398) 0:03:50.057 **********
2025-04-13 00:32:53.302593 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:32:53.302706 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:32:53.302731 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:32:53.303703 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:32:53.304771 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:32:53.305179 | orchestrator | changed: [testbed-manager]
2025-04-13 00:32:53.305680 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:32:53.306356 | orchestrator |
2025-04-13 00:32:53.306915 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-04-13 00:32:53.307636 | orchestrator | Sunday 13 April 2025 00:32:53 +0000 (0:00:08.100) 0:03:58.157 **********
2025-04-13 00:32:53.942802 | orchestrator | changed: [testbed-manager]
2025-04-13 00:32:53.943617 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:32:53.943755 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:32:53.945554 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:32:53.946253 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:32:53.947051 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:32:53.947750 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:32:53.948765 | orchestrator |
2025-04-13 00:32:53.949323 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-04-13 00:32:53.949918 | orchestrator | Sunday 13 April 2025 00:32:53 +0000 (0:00:01.058) 0:03:58.800 **********
2025-04-13 00:32:55.002251 | orchestrator | changed: [testbed-manager]
2025-04-13 00:32:55.002527 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:32:55.003466 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:32:55.004401 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:32:55.005818 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:32:55.006724 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:32:55.007805 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:32:55.009146 | orchestrator |
2025-04-13 00:32:55.009890 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-04-13 00:32:55.010996 | orchestrator | Sunday 13 April 2025 00:32:54 +0000 (0:00:01.058) 0:03:59.858 **********
2025-04-13 00:32:56.029960 | orchestrator | changed: [testbed-manager]
2025-04-13 00:32:56.033556 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:32:56.033633 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:32:56.034413 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:32:56.034923 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:32:56.035963 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:32:56.036806 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:32:56.037544 | orchestrator |
2025-04-13 00:32:56.037860 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-04-13 00:32:56.038316 | orchestrator | Sunday 13 April 2025 00:32:56 +0000 (0:00:01.026) 0:04:00.885 **********
2025-04-13 00:32:56.117224 | orchestrator | ok: [testbed-manager]
2025-04-13 00:32:56.190789 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:32:56.236671 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:32:56.272394 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:32:56.369179 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:32:56.369352 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:32:56.370483 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:32:56.371739 | orchestrator |
2025-04-13 00:32:56.371864 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-04-13 00:32:56.376133 | orchestrator | Sunday 13 April 2025 00:32:56 +0000 (0:00:00.340) 0:04:01.225 **********
2025-04-13 00:32:56.479669 | orchestrator | ok: [testbed-manager]
2025-04-13 00:32:56.512431 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:32:56.559001 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:32:56.614228 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:32:56.707127 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:32:56.707388 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:32:56.707425 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:32:56.707681 | orchestrator |
2025-04-13 00:32:56.707714 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-04-13 00:32:56.707953 | orchestrator | Sunday 13 April 2025 00:32:56 +0000 (0:00:00.338) 0:04:01.564 **********
2025-04-13 00:32:56.816286 | orchestrator | ok: [testbed-manager]
2025-04-13 00:32:56.851504 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:32:56.883120 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:32:56.921897 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:32:57.001045 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:32:57.001344 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:32:57.001417 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:32:57.001846 | orchestrator |
2025-04-13 00:32:57.002239 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-04-13 00:32:57.002771 | orchestrator | Sunday 13 April 2025 00:32:56 +0000 (0:00:00.294) 0:04:01.859 **********
2025-04-13 00:33:02.829859 | orchestrator | ok: [testbed-manager]
2025-04-13 00:33:02.829998 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:33:02.830978 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:33:02.830998 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:33:02.831008 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:33:02.831020 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:33:03.245392 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:33:03.245495 | orchestrator |
2025-04-13 00:33:03.245507 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-04-13 00:33:03.245516 | orchestrator | Sunday 13 April 2025 00:33:02 +0000 (0:00:05.828) 0:04:07.687 **********
2025-04-13 00:33:03.245537 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:33:03.246234 | orchestrator |
2025-04-13 00:33:03.246262 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-04-13 00:33:03.246708 | orchestrator | Sunday 13 April 2025 00:33:03 +0000 (0:00:00.408) 0:04:08.096 **********
2025-04-13 00:33:03.321511 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-04-13 00:33:03.321767 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-04-13 00:33:03.361156 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-04-13 00:33:03.361268 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-04-13 00:33:03.361995 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:33:03.362937 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-04-13 00:33:03.402687 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:33:03.403085 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-04-13 00:33:03.403720 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-04-13 00:33:03.441601 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-04-13 00:33:03.441712 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:33:03.442206 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-04-13 00:33:03.483068 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:33:03.558778 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-04-13 00:33:03.558872 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-04-13 00:33:03.558911 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-04-13 00:33:03.559509 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:33:03.560253 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:33:03.562165 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-04-13 00:33:03.562315 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-04-13 00:33:03.562331 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:33:03.562696 | orchestrator |
2025-04-13 00:33:03.566120 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-04-13 00:33:04.039014 | orchestrator | Sunday 13 April 2025 00:33:03 +0000 (0:00:00.320) 0:04:08.417 **********
2025-04-13 00:33:04.039122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:33:04.040384 | orchestrator |
2025-04-13 00:33:04.041592 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-04-13 00:33:04.042335 | orchestrator | Sunday 13 April 2025 00:33:04 +0000 (0:00:00.478) 0:04:08.895 **********
2025-04-13 00:33:04.124635 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-04-13 00:33:04.124985 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-04-13 00:33:04.163400 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:33:04.164042 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-04-13 00:33:04.197193 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:33:04.237718 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:33:04.238665 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-04-13 00:33:04.278208 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-04-13 00:33:04.278514 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:33:04.279004 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-04-13 00:33:04.372302 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:33:04.804372 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:33:04.804491 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-04-13 00:33:04.804510 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:33:04.804525 | orchestrator |
2025-04-13 00:33:04.804539 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-04-13 00:33:04.804553 | orchestrator | Sunday 13 April 2025 00:33:04 +0000 (0:00:00.326) 0:04:09.222 **********
2025-04-13 00:33:04.804634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:33:04.805372 | orchestrator |
2025-04-13 00:33:04.807166 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-04-13 00:33:04.807246 | orchestrator | Sunday 13 April 2025 00:33:04 +0000 (0:00:00.436) 0:04:09.659 **********
2025-04-13 00:33:38.497302 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:33:38.497769 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:33:38.497815 | orchestrator | changed: [testbed-manager]
2025-04-13 00:33:38.499463 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:33:38.501525 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:33:38.502685 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:33:38.503549 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:33:38.504129 | orchestrator |
2025-04-13 00:33:38.505196 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-04-13 00:33:38.505764 | orchestrator | Sunday 13 April 2025 00:33:38 +0000 (0:00:33.685) 0:04:43.345 **********
2025-04-13 00:33:46.030990 | orchestrator | changed: [testbed-manager]
2025-04-13 00:33:46.031323 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:33:46.031362 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:33:46.032059 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:33:46.034697 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:33:46.035064 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:33:46.035818 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:33:46.036293 | orchestrator |
2025-04-13 00:33:46.036704 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-04-13 00:33:46.037568 | orchestrator | Sunday 13 April 2025 00:33:46 +0000 (0:00:07.538) 0:04:50.884 **********
2025-04-13 00:33:53.103916 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:33:53.104152 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:33:53.107565 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:33:53.108797 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:33:53.108832 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:33:53.109787 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:33:53.110533 | orchestrator | changed: [testbed-manager]
2025-04-13 00:33:53.111512 | orchestrator |
2025-04-13 00:33:53.112290 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-04-13 00:33:53.113126 | orchestrator | Sunday 13 April 2025 00:33:53 +0000 (0:00:07.075) 0:04:57.959 **********
2025-04-13 00:33:54.622208 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:33:54.622394 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:33:54.622424 | orchestrator | ok: [testbed-manager]
2025-04-13 00:33:54.622945 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:33:54.624041 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:33:54.624407 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:33:54.625444 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:33:54.625879 | orchestrator |
2025-04-13 00:33:54.626329 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-04-13 00:33:54.626803 | orchestrator | Sunday 13 April 2025 00:33:54 +0000 (0:00:01.517) 0:04:59.477 **********
2025-04-13 00:34:00.174784 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:34:00.175283 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:34:00.175396 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:34:00.175427 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:34:00.177499 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:34:00.178127 | orchestrator | changed: [testbed-manager]
2025-04-13 00:34:00.179844 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:34:00.179915 | orchestrator |
2025-04-13 00:34:00.180663 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-04-13 00:34:00.181159 | orchestrator | Sunday 13 April 2025 00:34:00 +0000 (0:00:05.553) 0:05:05.031 **********
2025-04-13 00:34:00.611572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:34:00.612226 | orchestrator |
2025-04-13 00:34:00.612279 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-04-13 00:34:00.616122 | orchestrator | Sunday 13 April 2025 00:34:00 +0000 (0:00:00.437) 0:05:05.468 **********
2025-04-13 00:34:01.341238 | orchestrator | changed: [testbed-manager]
2025-04-13 00:34:01.344339 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:34:01.344514 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:34:01.344538 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:34:01.344554 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:34:01.344574 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:34:01.344883 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:34:01.345577 | orchestrator |
2025-04-13 00:34:01.346316 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-04-13 00:34:01.351056 | orchestrator | Sunday 13 April 2025 00:34:01 +0000 (0:00:00.728) 0:05:06.196 **********
2025-04-13 00:34:02.850888 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:34:02.851623 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:34:02.851694 | orchestrator | ok: [testbed-manager]
2025-04-13 00:34:02.851732 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:34:02.852474 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:34:02.853801 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:34:02.854450 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:34:02.855730 | orchestrator |
2025-04-13 00:34:02.856398 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-04-13 00:34:02.857244 | orchestrator | Sunday 13 April 2025 00:34:02 +0000 (0:00:01.508) 0:05:07.704 **********
2025-04-13 00:34:03.618250 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:34:03.618465 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:34:03.622601 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:34:03.622850 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:34:03.622886 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:34:03.622901 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:34:03.622923 | orchestrator | changed: [testbed-manager]
2025-04-13 00:34:03.623999 | orchestrator |
2025-04-13 00:34:03.625032 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-04-13 00:34:03.626102 | orchestrator | Sunday 13 April 2025 00:34:03 +0000 (0:00:00.769) 0:05:08.474 **********
2025-04-13 00:34:03.722831 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:34:03.769713 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:34:03.803024 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:34:03.834121 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:34:03.911101 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:34:03.912706 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:34:03.915889 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:34:03.917420 | orchestrator |
2025-04-13 00:34:03.918187 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-04-13 00:34:03.919303 | orchestrator | Sunday 13 April 2025 00:34:03 +0000 (0:00:00.292) 0:05:08.767 **********
2025-04-13 00:34:03.975234 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:34:04.008871 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:34:04.042128 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:34:04.075634 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:34:04.111322 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:34:04.292845 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:34:04.395175 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:34:04.395294 | orchestrator |
2025-04-13 00:34:04.395314 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-04-13 00:34:04.395331 | orchestrator | Sunday 13 April 2025 00:34:04 +0000 (0:00:00.379) 0:05:09.146 **********
2025-04-13 00:34:04.395362 | orchestrator | ok: [testbed-manager]
2025-04-13 00:34:04.430894 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:34:04.472766 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:34:04.511943 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:34:04.586866 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:34:04.587221 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:34:04.587256 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:34:04.587469 | orchestrator |
2025-04-13 00:34:04.588172 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-04-13 00:34:04.589260 | orchestrator | Sunday 13 April 2025 00:34:04 +0000 (0:00:00.298) 0:05:09.445 **********
2025-04-13 00:34:04.670537 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:34:04.745564 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:34:04.784355 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:34:04.822091 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:34:04.856460 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:34:04.935365 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:34:04.935571 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:34:04.936059 | orchestrator |
2025-04-13 00:34:04.936274 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-04-13 00:34:04.936889 | orchestrator | Sunday 13 April 2025 00:34:04 +0000 (0:00:00.346) 0:05:09.792 **********
2025-04-13 00:34:05.058964 | orchestrator | ok: [testbed-manager]
2025-04-13 00:34:05.094630 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:34:05.133943 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:34:05.189827 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:34:05.287933 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:34:05.291131 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:34:05.291209 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:34:05.291962 | orchestrator |
2025-04-13 00:34:05.292000 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-04-13 00:34:05.292956 | orchestrator | Sunday 13 April 2025 00:34:05 +0000 (0:00:00.351) 0:05:10.144 **********
2025-04-13 00:34:05.379333 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:34:05.413756 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:34:05.449995 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:34:05.489319 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:34:05.521917 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:34:05.600202 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:34:05.600339 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:34:05.601585 | orchestrator |
2025-04-13 00:34:05.602075 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-04-13 00:34:05.603400 | orchestrator | Sunday 13 April 2025 00:34:05 +0000 (0:00:00.314) 0:05:10.458 **********
2025-04-13 00:34:05.669835 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:34:05.712246 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:34:05.751583 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:34:05.783954 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:34:05.836806 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:34:06.016173 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:34:06.016362 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:34:06.016395 | orchestrator |
2025-04-13 00:34:06.016411 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-04-13 00:34:06.016434 | orchestrator | Sunday 13 April 2025 00:34:06 +0000 (0:00:00.412) 0:05:10.870 **********
2025-04-13 00:34:06.476249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:34:06.476403 | orchestrator |
2025-04-13 00:34:06.476703 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-04-13 00:34:06.477034 | orchestrator | Sunday 13 April 2025 00:34:06 +0000 (0:00:00.461) 0:05:11.332 **********
2025-04-13 00:34:07.313091 | orchestrator | ok: [testbed-manager]
2025-04-13 00:34:07.313418 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:34:07.313461 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:34:07.314326 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:34:07.314667 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:34:07.315575 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:34:07.317176 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:34:07.317247 | orchestrator |
2025-04-13 00:34:07.317270 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-04-13 00:34:07.317372 | orchestrator | Sunday 13 April 2025 00:34:07 +0000 (0:00:00.833) 0:05:12.166 **********
2025-04-13 00:34:10.223203 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:34:10.223350 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:34:10.227277 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:34:10.227917 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:34:10.228857 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:34:10.229566 | orchestrator | ok: [testbed-manager]
2025-04-13 00:34:10.230346 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:34:10.230936 | orchestrator |
2025-04-13 00:34:10.233831 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-04-13 00:34:10.234468 | orchestrator | Sunday 13 April 2025 00:34:10 +0000 (0:00:02.913) 0:05:15.079 **********
2025-04-13 00:34:10.309279 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-04-13 00:34:10.309440 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-04-13 00:34:10.401032 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-04-13 00:34:10.402117 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-04-13 00:34:10.403061 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-04-13 00:34:10.404016 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-04-13 00:34:10.469057 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:34:10.470135 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-04-13 00:34:10.542534 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-04-13 00:34:10.543013 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:34:10.545774 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-04-13 00:34:10.546584 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-04-13 00:34:10.546742 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-04-13 00:34:10.641346 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-04-13 00:34:10.641520 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:34:10.641840 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-04-13 00:34:10.642225 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-04-13 00:34:10.642254 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-04-13 00:34:10.709946 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:34:10.710784 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-04-13 00:34:10.710849 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-04-13 00:34:10.852672 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:34:10.854360 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-04-13 00:34:10.854755 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:34:10.856267 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-04-13 00:34:10.856357 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-04-13 00:34:10.860018 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-04-13 00:34:16.984873 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:34:16.985014 | orchestrator |
2025-04-13 00:34:16.985037 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-04-13 00:34:16.985053 | orchestrator | Sunday 13 April 2025 00:34:10 +0000 (0:00:00.629) 0:05:15.708 **********
2025-04-13 00:34:16.985084 | orchestrator | ok: [testbed-manager]
2025-04-13 00:34:16.985151 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:34:16.985174 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:34:16.985515 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:34:16.986152 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:34:16.986585 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:34:16.987089 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:34:16.987844 | orchestrator |
2025-04-13 00:34:16.988543 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-04-13 00:34:16.988839 | orchestrator | Sunday 13 April 2025 00:34:16 +0000 (0:00:06.130) 0:05:21.839 **********
2025-04-13 00:34:18.012908 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:34:18.015239 | orchestrator | ok: [testbed-manager]
2025-04-13 00:34:18.016345 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:34:18.016434 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:34:18.017956 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:34:18.019479 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:34:18.019949 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:34:18.020698 | orchestrator |
2025-04-13 00:34:18.021897 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-04-13 00:34:18.022577 | orchestrator | Sunday 13 April 2025 00:34:18 +0000 (0:00:01.030) 0:05:22.869 **********
2025-04-13 00:34:25.481882 | orchestrator | ok: [testbed-manager]
2025-04-13 00:34:25.482182 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:34:25.483416 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:34:25.484529 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:34:25.486608 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:34:25.487335 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:34:25.488006 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:34:25.488824 | orchestrator |
2025-04-13 00:34:25.489509 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-04-13 00:34:25.490357 | orchestrator | Sunday 13 April 2025 00:34:25 +0000 (0:00:07.465) 0:05:30.334 **********
2025-04-13 00:34:28.446105 | orchestrator | changed: [testbed-manager]
2025-04-13 00:34:28.446327 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:34:28.447477 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:34:28.447712 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:34:28.447813 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:34:28.448455 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:34:28.449168 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:34:28.450111 | orchestrator |
2025-04-13 00:34:28.452373 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-04-13 00:34:28.452865 | orchestrator | Sunday 13 April 2025 00:34:28 +0000 (0:00:02.966) 0:05:33.301 **********
2025-04-13 00:34:29.923375 | orchestrator | ok: [testbed-manager]
2025-04-13 00:34:29.923541 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:34:29.924870 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:34:29.926298 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:34:29.926771 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:34:29.927651 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:34:29.928490 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:34:29.929583 | orchestrator |
2025-04-13 00:34:29.930444 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-04-13 00:34:29.932181 | orchestrator | Sunday 13 April 2025 00:34:29 +0000 (0:00:01.476) 0:05:34.778 **********
2025-04-13 00:34:31.279504 | orchestrator | ok: [testbed-manager]
2025-04-13 00:34:31.280345 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:34:31.281078 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:34:31.281753 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:34:31.282997 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:34:31.283857 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:34:31.284576 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:34:31.290767 | orchestrator |
2025-04-13 00:34:31.296112 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-04-13 00:34:31.297050 | orchestrator | Sunday 13 April 2025 00:34:31 +0000 (0:00:01.353) 0:05:36.131 **********
2025-04-13 00:34:31.518237 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:34:31.586258 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:34:31.660497 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:34:31.730732 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:34:31.918822 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:34:31.919203 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:34:31.920849 | orchestrator | changed: [testbed-manager]
2025-04-13 00:34:31.923243 | orchestrator |
2025-04-13 00:34:31.923863 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-04-13 00:34:31.924667 | orchestrator | Sunday 13 April 2025 00:34:31 +0000 (0:00:00.641) 0:05:36.773 **********
2025-04-13 00:34:41.346390 | orchestrator | ok: [testbed-manager]
2025-04-13 00:34:41.349138 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:34:41.349201 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:34:41.350211 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:34:41.350246 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:34:41.350881 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:34:41.351085 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:34:41.351516 | orchestrator |
2025-04-13 00:34:41.353139 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-04-13 00:34:41.355720 | orchestrator | Sunday 13 April 2025 00:34:41 +0000 (0:00:09.427) 0:05:46.201 **********
2025-04-13 00:34:42.245444 | orchestrator | changed: [testbed-manager]
2025-04-13 00:34:42.245658 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:34:42.245731 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:34:42.247074 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:34:42.248271 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:34:42.249290 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:34:42.249976 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:34:42.251057 | orchestrator |
2025-04-13 00:34:42.251920 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-04-13 00:34:42.252657 | orchestrator | Sunday 13 April 2025 00:34:42 +0000 (0:00:00.899) 0:05:47.100 **********
2025-04-13 00:34:54.724430 | orchestrator | ok: [testbed-manager]
2025-04-13 00:35:06.822241 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:35:06.822426 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:35:06.822450 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:35:06.822466 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:35:06.822481 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:35:06.822497 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:35:06.822513 | orchestrator |
2025-04-13 00:35:06.822530 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-04-13 00:35:06.822547 | orchestrator | Sunday 13 April 2025 00:34:54 +0000 (0:00:12.475) 0:05:59.575 **********
2025-04-13 00:35:06.822582 | orchestrator | ok: [testbed-manager]
2025-04-13 00:35:06.822658 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:35:06.822677 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:35:06.822693 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:35:06.822752 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:35:06.822773 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:35:06.824041 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:35:06.824207 | orchestrator |
2025-04-13 00:35:06.824600 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-04-13 00:35:06.825573 | orchestrator | Sunday 13 April 2025 00:35:06 +0000 (0:00:12.098) 0:06:11.674 **********
2025-04-13 00:35:07.233890 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-04-13 00:35:07.305147 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-04-13 00:35:08.064925 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-04-13 00:35:08.065325 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-04-13 00:35:08.065681 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-04-13 00:35:08.066177 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-04-13 00:35:08.067267 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-04-13 00:35:08.067607 |
orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-04-13 00:35:08.068571 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-04-13 00:35:08.071749 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-04-13 00:35:08.073033 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-04-13 00:35:08.074107 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-04-13 00:35:08.075488 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-04-13 00:35:08.075924 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-04-13 00:35:08.076874 | orchestrator | 2025-04-13 00:35:08.078165 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-04-13 00:35:08.078970 | orchestrator | Sunday 13 April 2025 00:35:08 +0000 (0:00:01.242) 0:06:12.916 ********** 2025-04-13 00:35:08.207037 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:35:08.268792 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:35:08.333799 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:35:08.403913 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:35:08.470082 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:35:08.577084 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:35:08.577289 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:35:08.578109 | orchestrator | 2025-04-13 00:35:08.578672 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-04-13 00:35:08.582235 | orchestrator | Sunday 13 April 2025 00:35:08 +0000 (0:00:00.516) 0:06:13.433 ********** 2025-04-13 00:35:12.224897 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:12.225377 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:35:12.226940 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:35:12.227698 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:35:12.227817 | orchestrator | changed: 
[testbed-node-3] 2025-04-13 00:35:12.230771 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:35:12.231931 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:35:12.232560 | orchestrator | 2025-04-13 00:35:12.233493 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-04-13 00:35:12.233932 | orchestrator | Sunday 13 April 2025 00:35:12 +0000 (0:00:03.646) 0:06:17.080 ********** 2025-04-13 00:35:12.350264 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:35:12.602911 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:35:12.671003 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:35:12.744063 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:35:12.818141 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:35:12.927010 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:35:12.927179 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:35:12.928957 | orchestrator | 2025-04-13 00:35:12.929014 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-04-13 00:35:12.929580 | orchestrator | Sunday 13 April 2025 00:35:12 +0000 (0:00:00.699) 0:06:17.779 ********** 2025-04-13 00:35:12.992646 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-04-13 00:35:13.077042 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-04-13 00:35:13.077169 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:35:13.077234 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-04-13 00:35:13.077257 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-04-13 00:35:13.154619 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:35:13.154988 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-04-13 00:35:13.155039 | orchestrator | skipping: [testbed-node-4] => 
(item=python-docker)  2025-04-13 00:35:13.241630 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:35:13.241833 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-04-13 00:35:13.241861 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-04-13 00:35:13.323820 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:35:13.324000 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-04-13 00:35:13.390373 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-04-13 00:35:13.390586 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-04-13 00:35:13.390942 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-04-13 00:35:13.509655 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:35:13.510958 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:35:13.511494 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-04-13 00:35:13.512488 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-04-13 00:35:13.513554 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:35:13.514141 | orchestrator | 2025-04-13 00:35:13.514641 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-04-13 00:35:13.516387 | orchestrator | Sunday 13 April 2025 00:35:13 +0000 (0:00:00.584) 0:06:18.364 ********** 2025-04-13 00:35:13.653539 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:35:13.720092 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:35:13.794824 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:35:13.874125 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:35:13.939383 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:35:14.056193 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:35:14.056851 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:35:14.056907 | orchestrator | 
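The per-host result lines throughout this log (`ok: [host]`, `changed: [host]`, `skipping: [host]`) can be tallied to get a quick picture of what each play actually did before the final recap. A small sketch of that tally (the status keywords are taken from the lines visible in this log):

```python
import re
from collections import Counter

# One match per result line such as "changed: [testbed-node-3]" or
# "skipping: [testbed-manager] => (item=python3-docker)".
RESULT = re.compile(r"\b(ok|changed|skipping|failed|unreachable): \[([\w-]+)\]")

def tally(text: str) -> Counter:
    """Count task results by status across a chunk of console log."""
    return Counter(status for status, _host in RESULT.findall(text))

sample = """
ok: [testbed-manager]
changed: [testbed-node-3]
skipping: [testbed-node-4] => (item=python3-docker)
changed: [testbed-node-5]
"""
print(sorted(tally(sample).items()))
# [('changed', 2), ('ok', 1), ('skipping', 1)]
```

Running this over a whole job log highlights tasks where only the manager reports `ok` while the nodes report `changed` — the pattern seen repeatedly above, since the manager was provisioned earlier.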
2025-04-13 00:35:14.057709 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-04-13 00:35:14.058449 | orchestrator | Sunday 13 April 2025 00:35:14 +0000 (0:00:00.546) 0:06:18.911 ********** 2025-04-13 00:35:14.185357 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:35:14.262464 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:35:14.328320 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:35:14.395134 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:35:14.469371 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:35:14.560590 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:35:14.563229 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:35:14.563803 | orchestrator | 2025-04-13 00:35:14.564460 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-04-13 00:35:14.565154 | orchestrator | Sunday 13 April 2025 00:35:14 +0000 (0:00:00.502) 0:06:19.413 ********** 2025-04-13 00:35:14.709551 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:35:14.781001 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:35:14.852250 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:35:14.926174 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:35:15.007830 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:35:15.136798 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:35:15.137923 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:35:15.139416 | orchestrator | 2025-04-13 00:35:15.140072 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-04-13 00:35:15.141191 | orchestrator | Sunday 13 April 2025 00:35:15 +0000 (0:00:00.579) 0:06:19.993 ********** 2025-04-13 00:35:21.067186 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:21.070090 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:35:21.070625 | 
orchestrator | changed: [testbed-node-1] 2025-04-13 00:35:21.070673 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:35:21.071573 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:35:21.072526 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:35:21.073434 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:35:21.074137 | orchestrator | 2025-04-13 00:35:21.076397 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-04-13 00:35:21.076923 | orchestrator | Sunday 13 April 2025 00:35:21 +0000 (0:00:05.928) 0:06:25.921 ********** 2025-04-13 00:35:21.905609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:35:21.905825 | orchestrator | 2025-04-13 00:35:21.906636 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-04-13 00:35:21.907759 | orchestrator | Sunday 13 April 2025 00:35:21 +0000 (0:00:00.841) 0:06:26.763 ********** 2025-04-13 00:35:22.784867 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:22.785047 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:35:22.786111 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:35:22.787644 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:35:22.788408 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:35:22.789269 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:35:22.789926 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:35:22.790301 | orchestrator | 2025-04-13 00:35:22.790799 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-04-13 00:35:22.791327 | orchestrator | Sunday 13 April 2025 00:35:22 +0000 (0:00:00.877) 0:06:27.640 ********** 2025-04-13 00:35:23.415602 | orchestrator | ok: [testbed-manager] 
2025-04-13 00:35:23.843860 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:35:23.844021 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:35:23.844049 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:35:23.845984 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:35:23.846087 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:35:23.846116 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:35:23.846173 | orchestrator | 2025-04-13 00:35:23.846513 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-04-13 00:35:23.846903 | orchestrator | Sunday 13 April 2025 00:35:23 +0000 (0:00:01.058) 0:06:28.699 ********** 2025-04-13 00:35:25.172240 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:25.172977 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:35:25.174730 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:35:25.175609 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:35:25.175812 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:35:25.177571 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:35:25.178172 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:35:25.179392 | orchestrator | 2025-04-13 00:35:25.179803 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-04-13 00:35:25.180839 | orchestrator | Sunday 13 April 2025 00:35:25 +0000 (0:00:01.327) 0:06:30.026 ********** 2025-04-13 00:35:25.308141 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:35:26.529504 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:35:26.530223 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:35:26.530953 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:35:26.532164 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:35:26.532900 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:35:26.533460 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:35:26.534433 | orchestrator | 
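The config tasks in this section template dockerd's `daemon.json` onto every host. The exact keys OSISM renders are not shown in the log; the sketch below just illustrates the shape of such a file with a few common dockerd options (all key choices here are assumptions, not the role's actual template):

```python
import json

# Hypothetical daemon.json content; keys are common dockerd options chosen
# for illustration, not taken from the osism.services.docker role.
daemon_conf = {
    "log-driver": "json-file",
    "log-opts": {"max-size": "10m", "max-file": "3"},
    "live-restore": True,
}

text = json.dumps(daemon_conf, indent=2, sort_keys=True)
print(text)
```

Changing this file is why the `Restart docker service` handler fires later in the log: dockerd only rereads `daemon.json` on restart (or, for a subset of options, on reload).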
2025-04-13 00:35:26.535585 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-04-13 00:35:26.536572 | orchestrator | Sunday 13 April 2025 00:35:26 +0000 (0:00:01.357) 0:06:31.384 ********** 2025-04-13 00:35:27.867698 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:27.868095 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:35:27.868677 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:35:27.869815 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:35:27.870356 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:35:27.871505 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:35:27.872142 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:35:27.873141 | orchestrator | 2025-04-13 00:35:27.874388 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-04-13 00:35:27.875706 | orchestrator | Sunday 13 April 2025 00:35:27 +0000 (0:00:01.336) 0:06:32.720 ********** 2025-04-13 00:35:29.245658 | orchestrator | changed: [testbed-manager] 2025-04-13 00:35:29.245912 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:35:29.246496 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:35:29.247486 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:35:29.248077 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:35:29.249143 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:35:29.249632 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:35:29.250927 | orchestrator | 2025-04-13 00:35:29.251001 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-04-13 00:35:29.251725 | orchestrator | Sunday 13 April 2025 00:35:29 +0000 (0:00:01.380) 0:06:34.101 ********** 2025-04-13 00:35:30.348914 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:35:30.349089 | orchestrator | 2025-04-13 00:35:30.350059 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-04-13 00:35:30.351262 | orchestrator | Sunday 13 April 2025 00:35:30 +0000 (0:00:01.102) 0:06:35.203 ********** 2025-04-13 00:35:31.725078 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:35:31.725289 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:31.725321 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:35:31.726579 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:35:31.727196 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:35:31.728253 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:35:31.728827 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:35:31.729330 | orchestrator | 2025-04-13 00:35:31.730385 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-04-13 00:35:31.730914 | orchestrator | Sunday 13 April 2025 00:35:31 +0000 (0:00:01.374) 0:06:36.577 ********** 2025-04-13 00:35:32.846728 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:32.847409 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:35:32.847818 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:35:32.848901 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:35:32.849765 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:35:32.850871 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:35:32.851886 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:35:32.852018 | orchestrator | 2025-04-13 00:35:32.852921 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-04-13 00:35:32.853768 | orchestrator | Sunday 13 April 2025 00:35:32 +0000 (0:00:01.125) 0:06:37.703 ********** 2025-04-13 00:35:33.947567 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:33.948532 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:35:33.948585 | 
orchestrator | ok: [testbed-node-4] 2025-04-13 00:35:33.949280 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:35:33.950276 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:35:33.950775 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:35:33.951639 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:35:33.952251 | orchestrator | 2025-04-13 00:35:33.953064 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-04-13 00:35:33.953697 | orchestrator | Sunday 13 April 2025 00:35:33 +0000 (0:00:01.097) 0:06:38.801 ********** 2025-04-13 00:35:35.312632 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:35.313080 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:35:35.313131 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:35:35.313970 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:35:35.315515 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:35:35.315582 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:35:35.316392 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:35:35.317813 | orchestrator | 2025-04-13 00:35:35.318499 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-04-13 00:35:35.319170 | orchestrator | Sunday 13 April 2025 00:35:35 +0000 (0:00:01.367) 0:06:40.168 ********** 2025-04-13 00:35:36.486423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:35:36.487038 | orchestrator | 2025-04-13 00:35:36.488398 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-13 00:35:36.490502 | orchestrator | Sunday 13 April 2025 00:35:36 +0000 (0:00:00.883) 0:06:41.052 ********** 2025-04-13 00:35:36.491644 | orchestrator | 2025-04-13 00:35:36.492368 | orchestrator | TASK [osism.services.docker : 
Flush handlers] ********************************** 2025-04-13 00:35:36.493241 | orchestrator | Sunday 13 April 2025 00:35:36 +0000 (0:00:00.037) 0:06:41.089 ********** 2025-04-13 00:35:36.493892 | orchestrator | 2025-04-13 00:35:36.494160 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-13 00:35:36.494874 | orchestrator | Sunday 13 April 2025 00:35:36 +0000 (0:00:00.037) 0:06:41.127 ********** 2025-04-13 00:35:36.495278 | orchestrator | 2025-04-13 00:35:36.495962 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-13 00:35:36.496639 | orchestrator | Sunday 13 April 2025 00:35:36 +0000 (0:00:00.047) 0:06:41.174 ********** 2025-04-13 00:35:36.497113 | orchestrator | 2025-04-13 00:35:36.497733 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-13 00:35:36.498354 | orchestrator | Sunday 13 April 2025 00:35:36 +0000 (0:00:00.039) 0:06:41.214 ********** 2025-04-13 00:35:36.498981 | orchestrator | 2025-04-13 00:35:36.499606 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-13 00:35:36.499866 | orchestrator | Sunday 13 April 2025 00:35:36 +0000 (0:00:00.039) 0:06:41.254 ********** 2025-04-13 00:35:36.500444 | orchestrator | 2025-04-13 00:35:36.500972 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-13 00:35:36.501252 | orchestrator | Sunday 13 April 2025 00:35:36 +0000 (0:00:00.045) 0:06:41.300 ********** 2025-04-13 00:35:36.501987 | orchestrator | 2025-04-13 00:35:36.502677 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-13 00:35:36.502986 | orchestrator | Sunday 13 April 2025 00:35:36 +0000 (0:00:00.039) 0:06:41.339 ********** 2025-04-13 00:35:37.516591 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:35:37.516943 | orchestrator | ok: 
[testbed-node-1] 2025-04-13 00:35:37.517095 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:35:37.517791 | orchestrator | 2025-04-13 00:35:37.518295 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-04-13 00:35:37.518551 | orchestrator | Sunday 13 April 2025 00:35:37 +0000 (0:00:01.031) 0:06:42.371 ********** 2025-04-13 00:35:39.091390 | orchestrator | changed: [testbed-manager] 2025-04-13 00:35:39.094077 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:35:39.096461 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:35:39.096740 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:35:39.097270 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:35:39.098103 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:35:39.098889 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:35:39.099964 | orchestrator | 2025-04-13 00:35:39.100211 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-04-13 00:35:39.100241 | orchestrator | Sunday 13 April 2025 00:35:39 +0000 (0:00:01.574) 0:06:43.945 ********** 2025-04-13 00:35:40.221342 | orchestrator | changed: [testbed-manager] 2025-04-13 00:35:40.221525 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:35:40.222481 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:35:40.223633 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:35:40.224488 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:35:40.225168 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:35:40.226003 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:35:40.227183 | orchestrator | 2025-04-13 00:35:40.227634 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-04-13 00:35:40.227933 | orchestrator | Sunday 13 April 2025 00:35:40 +0000 (0:00:01.132) 0:06:45.077 ********** 2025-04-13 00:35:40.360849 | orchestrator | skipping: [testbed-manager] 
2025-04-13 00:35:42.206879 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:35:42.207057 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:35:42.207375 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:35:42.207951 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:35:42.209323 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:35:42.211460 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:35:42.212409 | orchestrator | 2025-04-13 00:35:42.213019 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-04-13 00:35:42.214106 | orchestrator | Sunday 13 April 2025 00:35:42 +0000 (0:00:01.982) 0:06:47.060 ********** 2025-04-13 00:35:42.310663 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:35:42.311515 | orchestrator | 2025-04-13 00:35:42.312110 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-04-13 00:35:42.313198 | orchestrator | Sunday 13 April 2025 00:35:42 +0000 (0:00:00.108) 0:06:47.168 ********** 2025-04-13 00:35:43.436370 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:43.436739 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:35:43.437528 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:35:43.438205 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:35:43.439594 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:35:43.440540 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:35:43.441710 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:35:43.442931 | orchestrator | 2025-04-13 00:35:43.445850 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-04-13 00:35:43.449083 | orchestrator | Sunday 13 April 2025 00:35:43 +0000 (0:00:01.121) 0:06:48.290 ********** 2025-04-13 00:35:43.596738 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:35:43.670120 | orchestrator | skipping: [testbed-node-3] 2025-04-13 
00:35:43.734835 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:35:44.007638 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:35:44.079661 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:35:44.219528 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:35:44.221478 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:35:44.221589 | orchestrator | 2025-04-13 00:35:44.221657 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-04-13 00:35:44.224963 | orchestrator | Sunday 13 April 2025 00:35:44 +0000 (0:00:00.784) 0:06:49.075 ********** 2025-04-13 00:35:45.129554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:35:45.129934 | orchestrator | 2025-04-13 00:35:45.130335 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-04-13 00:35:45.130733 | orchestrator | Sunday 13 April 2025 00:35:45 +0000 (0:00:00.908) 0:06:49.984 ********** 2025-04-13 00:35:45.564891 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:45.950271 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:35:45.950447 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:35:45.953039 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:35:45.954159 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:35:45.954464 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:35:45.955679 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:35:45.956313 | orchestrator | 2025-04-13 00:35:45.956843 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-04-13 00:35:45.957438 | orchestrator | Sunday 13 April 2025 00:35:45 +0000 (0:00:00.822) 0:06:50.807 ********** 2025-04-13 00:35:48.403900 | orchestrator | ok: [testbed-manager] => 
(item=docker_containers) 2025-04-13 00:35:48.404091 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-04-13 00:35:48.406012 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-04-13 00:35:48.406916 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-04-13 00:35:48.413068 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-04-13 00:35:48.418747 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-04-13 00:35:48.419581 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-04-13 00:35:48.419633 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-04-13 00:35:48.419651 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-04-13 00:35:48.419674 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-04-13 00:35:48.422576 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-04-13 00:35:48.423455 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-04-13 00:35:48.423952 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-04-13 00:35:48.427024 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-04-13 00:35:48.427545 | orchestrator | 2025-04-13 00:35:48.427600 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-04-13 00:35:48.428944 | orchestrator | Sunday 13 April 2025 00:35:48 +0000 (0:00:02.450) 0:06:53.257 ********** 2025-04-13 00:35:48.544560 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:35:48.609913 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:35:48.674150 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:35:48.751140 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:35:48.812608 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:35:48.935283 | orchestrator | skipping: [testbed-node-1] 2025-04-13 
00:35:48.935523 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:35:48.935570 | orchestrator | 2025-04-13 00:35:48.936577 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-04-13 00:35:48.937058 | orchestrator | Sunday 13 April 2025 00:35:48 +0000 (0:00:00.532) 0:06:53.790 ********** 2025-04-13 00:35:49.757260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:35:49.757894 | orchestrator | 2025-04-13 00:35:49.758401 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-04-13 00:35:49.759427 | orchestrator | Sunday 13 April 2025 00:35:49 +0000 (0:00:00.822) 0:06:54.613 ********** 2025-04-13 00:35:50.187191 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:50.584611 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:35:50.584862 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:35:50.584903 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:35:50.584919 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:35:50.584933 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:35:50.584947 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:35:50.584961 | orchestrator | 2025-04-13 00:35:50.584977 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-04-13 00:35:50.584999 | orchestrator | Sunday 13 April 2025 00:35:50 +0000 (0:00:00.826) 0:06:55.439 ********** 2025-04-13 00:35:51.012509 | orchestrator | ok: [testbed-manager] 2025-04-13 00:35:51.233633 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:35:51.591056 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:35:51.594504 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:35:51.595851 | orchestrator | ok: [testbed-node-0] 
2025-04-13 00:35:51.595899 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:35:51.595935 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:35:51.596031 | orchestrator |
2025-04-13 00:35:51.596686 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-04-13 00:35:51.597272 | orchestrator | Sunday 13 April 2025 00:35:51 +0000 (0:00:01.007) 0:06:56.447 **********
2025-04-13 00:35:51.720795 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:35:51.789468 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:35:51.869288 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:35:51.962476 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:35:52.033724 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:35:52.141864 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:35:52.142883 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:35:52.143423 | orchestrator |
2025-04-13 00:35:52.146995 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-04-13 00:35:53.326714 | orchestrator | Sunday 13 April 2025 00:35:52 +0000 (0:00:00.550) 0:06:56.997 **********
2025-04-13 00:35:53.326907 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:35:53.327466 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:35:53.327508 | orchestrator | ok: [testbed-manager]
2025-04-13 00:35:53.327584 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:35:53.328358 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:35:53.328857 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:35:53.329829 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:35:53.331884 | orchestrator |
2025-04-13 00:35:53.332890 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-04-13 00:35:53.333857 | orchestrator | Sunday 13 April 2025 00:35:53 +0000 (0:00:01.181) 0:06:58.179 **********
2025-04-13 00:35:53.457251 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:35:53.522823 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:35:53.592345 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:35:53.665827 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:35:53.731216 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:35:53.856796 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:35:53.857450 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:35:53.858658 | orchestrator |
2025-04-13 00:35:53.859500 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-04-13 00:35:53.860097 | orchestrator | Sunday 13 April 2025 00:35:53 +0000 (0:00:00.535) 0:06:58.714 **********
2025-04-13 00:35:55.548981 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:35:55.549667 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:35:55.551430 | orchestrator | ok: [testbed-manager]
2025-04-13 00:35:55.552600 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:35:55.553971 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:35:55.555009 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:35:55.555447 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:35:55.556523 | orchestrator |
2025-04-13 00:35:55.559076 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-04-13 00:35:55.564026 | orchestrator | Sunday 13 April 2025 00:35:55 +0000 (0:00:01.690) 0:07:00.404 **********
2025-04-13 00:35:56.814331 | orchestrator | ok: [testbed-manager]
2025-04-13 00:35:56.814879 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:35:56.815939 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:35:56.819011 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:35:56.819190 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:35:56.819214 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:35:56.819228 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:35:56.819243 | orchestrator |
2025-04-13 00:35:56.819262 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-04-13 00:35:56.819882 | orchestrator | Sunday 13 April 2025 00:35:56 +0000 (0:00:01.264) 0:07:01.669 **********
2025-04-13 00:35:58.542751 | orchestrator | ok: [testbed-manager]
2025-04-13 00:35:58.542997 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:35:58.544668 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:35:58.546481 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:35:58.547463 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:35:58.548150 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:35:58.549090 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:35:58.549956 | orchestrator |
2025-04-13 00:35:58.550638 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-04-13 00:35:58.551417 | orchestrator | Sunday 13 April 2025 00:35:58 +0000 (0:00:01.729) 0:07:03.398 **********
2025-04-13 00:36:00.138850 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:00.138989 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:36:00.139012 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:36:00.139506 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:36:00.140605 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:36:00.140804 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:36:00.141017 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:36:00.141057 | orchestrator |
2025-04-13 00:36:00.141090 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-04-13 00:36:00.141360 | orchestrator | Sunday 13 April 2025 00:36:00 +0000 (0:00:01.595) 0:07:04.994 **********
2025-04-13 00:36:00.717991 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:00.795419 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:01.228482 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:01.229009 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:01.229045 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:01.229269 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:01.230555 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:01.231143 | orchestrator |
2025-04-13 00:36:01.231815 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-04-13 00:36:01.232212 | orchestrator | Sunday 13 April 2025 00:36:01 +0000 (0:00:01.088) 0:07:06.083 **********
2025-04-13 00:36:01.355383 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:36:01.436180 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:36:01.512218 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:36:01.593988 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:36:01.673603 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:36:02.091683 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:36:02.091901 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:36:02.092207 | orchestrator |
2025-04-13 00:36:02.092838 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-04-13 00:36:02.093447 | orchestrator | Sunday 13 April 2025 00:36:02 +0000 (0:00:00.864) 0:07:06.948 **********
2025-04-13 00:36:02.234871 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:36:02.298305 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:36:02.374325 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:36:02.441591 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:36:02.502508 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:36:02.605218 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:36:02.606007 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:36:02.607098 | orchestrator |
2025-04-13 00:36:02.610141 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-04-13 00:36:02.742241 | orchestrator | Sunday 13 April 2025 00:36:02 +0000 (0:00:00.514) 0:07:07.462 **********
2025-04-13 00:36:02.742392 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:02.813130 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:02.882396 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:02.958563 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:03.018811 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:03.123503 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:03.124909 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:03.125343 | orchestrator |
2025-04-13 00:36:03.128929 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-04-13 00:36:03.129629 | orchestrator | Sunday 13 April 2025 00:36:03 +0000 (0:00:00.516) 0:07:07.979 **********
2025-04-13 00:36:03.443294 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:03.506870 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:03.579233 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:03.649103 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:03.715211 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:03.823655 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:03.824511 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:03.825188 | orchestrator |
2025-04-13 00:36:03.826005 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-04-13 00:36:03.826721 | orchestrator | Sunday 13 April 2025 00:36:03 +0000 (0:00:00.700) 0:07:08.679 **********
2025-04-13 00:36:03.973330 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:04.041838 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:04.112746 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:04.178589 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:04.249818 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:04.412555 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:10.194672 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:10.194916 | orchestrator |
2025-04-13 00:36:10.194957 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-04-13 00:36:10.194984 | orchestrator | Sunday 13 April 2025 00:36:04 +0000 (0:00:00.583) 0:07:09.263 **********
2025-04-13 00:36:10.195031 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:10.195290 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:10.196055 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:10.197572 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:10.199199 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:10.200069 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:10.200865 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:10.201701 | orchestrator |
2025-04-13 00:36:10.203009 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-04-13 00:36:10.203299 | orchestrator | Sunday 13 April 2025 00:36:10 +0000 (0:00:05.786) 0:07:15.050 **********
2025-04-13 00:36:10.332002 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:36:10.400395 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:36:10.467914 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:36:10.539557 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:36:10.603027 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:36:10.751976 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:36:10.753155 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:36:10.753244 | orchestrator |
2025-04-13 00:36:10.754204 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-04-13 00:36:10.757541 | orchestrator | Sunday 13 April 2025 00:36:10 +0000 (0:00:00.556) 0:07:15.607 **********
2025-04-13 00:36:11.804087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:36:11.804536 | orchestrator |
2025-04-13 00:36:11.804552 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-04-13 00:36:11.807381 | orchestrator | Sunday 13 April 2025 00:36:11 +0000 (0:00:01.051) 0:07:16.658 **********
2025-04-13 00:36:13.521074 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:13.521346 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:13.522424 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:13.522625 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:13.523350 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:13.529136 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:13.531545 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:13.532424 | orchestrator |
2025-04-13 00:36:13.533126 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-04-13 00:36:13.536308 | orchestrator | Sunday 13 April 2025 00:36:13 +0000 (0:00:01.717) 0:07:18.375 **********
2025-04-13 00:36:14.721892 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:14.722125 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:14.722572 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:14.723256 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:14.724048 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:14.724315 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:14.725419 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:14.725546 | orchestrator |
2025-04-13 00:36:14.726071 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-04-13 00:36:14.726603 | orchestrator | Sunday 13 April 2025 00:36:14 +0000 (0:00:01.202) 0:07:19.578 **********
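The chrony role above installs the package and manages the service before templating its configuration from chrony.conf.j2 onto each host. For reference, a minimal rendered chrony.conf of the kind such a template produces might look like the following; the server names and paths below are illustrative placeholders, not values from this deployment:

```
# Illustrative chrony.conf (placeholder servers, not from this log)
server ntp1.example.com iburst
server ntp2.example.com iburst
driftfile /var/lib/chrony/chrony.drift
makestep 1.0 3
rtcsync
```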
2025-04-13 00:36:15.613899 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:15.614147 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:15.614180 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:15.614593 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:15.614880 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:15.615447 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:15.615612 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:15.616014 | orchestrator |
2025-04-13 00:36:15.616526 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-04-13 00:36:15.616559 | orchestrator | Sunday 13 April 2025 00:36:15 +0000 (0:00:00.892) 0:07:20.470 **********
2025-04-13 00:36:17.531879 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-13 00:36:17.533159 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-13 00:36:17.533361 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-13 00:36:17.535018 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-13 00:36:17.540527 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-13 00:36:17.541593 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-13 00:36:17.544240 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-13 00:36:18.341564 | orchestrator |
2025-04-13 00:36:18.341671 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-04-13 00:36:18.341686 | orchestrator | Sunday 13 April 2025 00:36:17 +0000 (0:00:01.915) 0:07:22.386 **********
2025-04-13 00:36:18.341729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:36:18.341841 | orchestrator |
2025-04-13 00:36:18.345270 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-04-13 00:36:27.113919 | orchestrator | Sunday 13 April 2025 00:36:18 +0000 (0:00:00.809) 0:07:23.195 **********
2025-04-13 00:36:27.114158 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:36:27.114254 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:36:27.117325 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:36:27.119480 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:36:27.119522 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:36:27.119545 | orchestrator | changed: [testbed-manager]
2025-04-13 00:36:27.119564 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:36:27.119619 | orchestrator |
2025-04-13 00:36:27.120441 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-04-13 00:36:27.121322 | orchestrator | Sunday 13 April 2025 00:36:27 +0000 (0:00:08.771) 0:07:31.967 **********
2025-04-13 00:36:28.840079 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:28.840846 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:28.840893 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:28.841814 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:28.842513 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:28.843011 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:28.844198 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:28.844371 | orchestrator |
2025-04-13 00:36:28.845452 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-04-13 00:36:30.104868 | orchestrator | Sunday 13 April 2025 00:36:28 +0000 (0:00:01.727) 0:07:33.694 **********
2025-04-13 00:36:30.105010 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:30.105462 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:30.109244 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:30.110085 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:30.110129 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:30.110156 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:30.111118 | orchestrator |
2025-04-13 00:36:30.112099 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-04-13 00:36:30.112927 | orchestrator | Sunday 13 April 2025 00:36:30 +0000 (0:00:01.264) 0:07:34.959 **********
2025-04-13 00:36:31.576674 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:36:31.577380 | orchestrator | changed: [testbed-manager]
2025-04-13 00:36:31.577451 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:36:31.577622 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:36:31.579191 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:36:31.580189 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:36:31.580705 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:36:31.580873 | orchestrator |
2025-04-13 00:36:31.581169 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-04-13 00:36:31.581389 | orchestrator |
2025-04-13 00:36:31.584351 | orchestrator | TASK [Include hardening role] **************************************************
2025-04-13 00:36:31.715492 | orchestrator | Sunday 13 April 2025 00:36:31 +0000 (0:00:01.474) 0:07:36.433 **********
2025-04-13 00:36:31.715624 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:36:31.782480 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:36:31.852225 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:36:31.916129 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:36:31.979712 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:36:32.111859 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:36:32.112934 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:36:32.113543 | orchestrator |
2025-04-13 00:36:32.118862 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-04-13 00:36:32.120034 | orchestrator |
2025-04-13 00:36:32.120620 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-04-13 00:36:32.121761 | orchestrator | Sunday 13 April 2025 00:36:32 +0000 (0:00:00.533) 0:07:36.967 **********
2025-04-13 00:36:33.409386 | orchestrator | changed: [testbed-manager]
2025-04-13 00:36:33.409615 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:36:33.409671 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:36:33.410623 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:36:33.412832 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:36:33.414091 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:36:33.415256 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:36:33.416107 | orchestrator |
2025-04-13 00:36:33.416715 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-04-13 00:36:33.417504 | orchestrator | Sunday 13 April 2025 00:36:33 +0000 (0:00:01.294) 0:07:38.262 **********
2025-04-13 00:36:34.799113 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:34.799839 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:34.800281 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:34.802404 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:34.803970 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:34.808770 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:34.809666 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:34.811028 | orchestrator |
2025-04-13 00:36:34.814074 | orchestrator | TASK [Include auditd role] *****************************************************
2025-04-13 00:36:34.817200 | orchestrator | Sunday 13 April 2025 00:36:34 +0000 (0:00:01.391) 0:07:39.654 **********
2025-04-13 00:36:34.953660 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:36:35.251738 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:36:35.315894 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:36:35.379488 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:36:35.450113 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:36:35.829299 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:36:35.829529 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:36:35.830959 | orchestrator |
2025-04-13 00:36:35.832854 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-04-13 00:36:35.834077 | orchestrator | Sunday 13 April 2025 00:36:35 +0000 (0:00:01.029) 0:07:40.683 **********
2025-04-13 00:36:37.152329 | orchestrator | changed: [testbed-manager]
2025-04-13 00:36:37.153947 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:36:37.156503 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:36:37.157612 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:36:37.157647 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:36:37.158495 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:36:37.159410 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:36:37.160381 | orchestrator |
2025-04-13 00:36:37.161284 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-04-13 00:36:37.161610 | orchestrator |
2025-04-13 00:36:37.162550 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-04-13 00:36:37.162903 | orchestrator | Sunday 13 April 2025 00:36:37 +0000 (0:00:01.324) 0:07:42.007 **********
2025-04-13 00:36:38.163851 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:36:38.164819 | orchestrator |
2025-04-13 00:36:38.165701 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-04-13 00:36:38.166892 | orchestrator | Sunday 13 April 2025 00:36:38 +0000 (0:00:01.010) 0:07:43.018 **********
2025-04-13 00:36:38.577434 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:38.986138 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:38.986834 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:38.987232 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:38.988290 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:38.989967 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:38.990360 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:38.991723 | orchestrator |
2025-04-13 00:36:38.992211 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-04-13 00:36:38.993209 | orchestrator | Sunday 13 April 2025 00:36:38 +0000 (0:00:00.821) 0:07:43.840 **********
2025-04-13 00:36:40.149286 | orchestrator | changed: [testbed-manager]
2025-04-13 00:36:40.151893 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:36:40.154175 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:36:40.155342 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:36:40.156237 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:36:40.157685 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:36:40.158392 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:36:40.159069 | orchestrator |
2025-04-13 00:36:40.161271 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-04-13 00:36:41.141301 | orchestrator | Sunday 13 April 2025 00:36:40 +0000 (0:00:01.163) 0:07:45.003 **********
2025-04-13 00:36:41.141514 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:36:41.141633 | orchestrator |
2025-04-13 00:36:41.143583 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-04-13 00:36:41.145000 | orchestrator | Sunday 13 April 2025 00:36:41 +0000 (0:00:00.991) 0:07:45.995 **********
2025-04-13 00:36:41.538350 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:41.975075 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:41.976701 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:41.977111 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:41.978419 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:41.979514 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:41.980553 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:41.981916 | orchestrator |
2025-04-13 00:36:41.982876 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-04-13 00:36:41.983313 | orchestrator | Sunday 13 April 2025 00:36:41 +0000 (0:00:00.834) 0:07:46.830 **********
2025-04-13 00:36:42.383199 | orchestrator | changed: [testbed-manager]
2025-04-13 00:36:43.075461 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:36:43.078264 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:36:43.079182 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:36:43.079213 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:36:43.079233 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:36:43.080157 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:36:43.080589 | orchestrator |
2025-04-13 00:36:43.082272 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:36:43.082325 | orchestrator | 2025-04-13 00:36:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:36:43.083299 | orchestrator | 2025-04-13 00:36:43 | INFO  | Please wait and do not abort execution.
2025-04-13 00:36:43.083340 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-04-13 00:36:43.084071 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-13 00:36:43.085231 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-13 00:36:43.085938 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-13 00:36:43.086746 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-04-13 00:36:43.087126 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-13 00:36:43.087910 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-13 00:36:43.088519 | orchestrator |
2025-04-13 00:36:43.089009 | orchestrator | Sunday 13 April 2025 00:36:43 +0000 (0:00:01.101) 0:07:47.931 **********
2025-04-13 00:36:43.089576 | orchestrator | ===============================================================================
2025-04-13 00:36:43.090520 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.13s
2025-04-13 00:36:43.090645 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.93s
2025-04-13 00:36:43.090890 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.69s
2025-04-13 00:36:43.091432 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.63s
2025-04-13 00:36:43.091900 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.48s
2025-04-13 00:36:43.092305 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.29s
2025-04-13 00:36:43.092705 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.22s
2025-04-13 00:36:43.093384 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.10s
2025-04-13 00:36:43.093634 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.43s
2025-04-13 00:36:43.094322 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.77s
2025-04-13 00:36:43.094702 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.10s
2025-04-13 00:36:43.094983 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.54s
2025-04-13 00:36:43.095439 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.47s
2025-04-13 00:36:43.095932 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.39s
2025-04-13 00:36:43.096412 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.08s
2025-04-13 00:36:43.096735 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.13s
2025-04-13 00:36:43.097504 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 5.93s
2025-04-13 00:36:43.097685 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.83s
2025-04-13 00:36:43.098195 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.79s
2025-04-13 00:36:43.098580 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.76s
2025-04-13 00:36:43.848550 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-04-13 00:36:45.829896 | orchestrator | + osism apply network
2025-04-13 00:36:45.830091 | orchestrator | 2025-04-13 00:36:45 | INFO  | Task 5574ac74-180e-43b3-a005-f6478fe4197e (network) was prepared for execution.
2025-04-13 00:36:49.226898 | orchestrator | 2025-04-13 00:36:45 | INFO  | It takes a moment until task 5574ac74-180e-43b3-a005-f6478fe4197e (network) has been started and output is visible here.
2025-04-13 00:36:49.227026 | orchestrator |
2025-04-13 00:36:49.231143 | orchestrator | PLAY [Apply role network] ******************************************************
2025-04-13 00:36:49.231721 | orchestrator |
2025-04-13 00:36:49.232845 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-04-13 00:36:49.233279 | orchestrator | Sunday 13 April 2025 00:36:49 +0000 (0:00:00.222) 0:00:00.222 **********
2025-04-13 00:36:49.374988 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:49.449427 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:49.533906 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:49.610148 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:49.687491 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:49.935881 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:49.936098 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:49.936126 | orchestrator |
2025-04-13 00:36:49.936560 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-04-13 00:36:49.937441 | orchestrator | Sunday 13 April 2025 00:36:49 +0000 (0:00:00.710) 0:00:00.932 **********
2025-04-13 00:36:51.204078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:36:51.204485 | orchestrator |
2025-04-13 00:36:51.208649 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-04-13 00:36:53.192855 | orchestrator | Sunday 13 April 2025 00:36:51 +0000 (0:00:01.266) 0:00:02.199 **********
2025-04-13 00:36:53.193014 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:53.193185 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:53.195168 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:53.195782 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:53.196658 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:53.198301 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:53.199999 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:53.200497 | orchestrator |
2025-04-13 00:36:53.201432 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-04-13 00:36:53.202793 | orchestrator | Sunday 13 April 2025 00:36:53 +0000 (0:00:01.992) 0:00:04.191 **********
2025-04-13 00:36:54.917390 | orchestrator | ok: [testbed-manager]
2025-04-13 00:36:54.918364 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:36:54.919659 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:36:54.920932 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:36:54.921728 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:36:54.922909 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:36:54.923736 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:36:54.924543 | orchestrator |
2025-04-13 00:36:54.924973 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-04-13 00:36:54.926209 | orchestrator | Sunday 13 April 2025 00:36:54 +0000 (0:00:01.720)
0:00:05.911 ********** 2025-04-13 00:36:55.465872 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-04-13 00:36:55.466126 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-04-13 00:36:55.466915 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-04-13 00:36:56.077300 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-04-13 00:36:56.081695 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-04-13 00:36:57.912137 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-04-13 00:36:57.912259 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-04-13 00:36:57.912279 | orchestrator | 2025-04-13 00:36:57.912296 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-04-13 00:36:57.912311 | orchestrator | Sunday 13 April 2025 00:36:56 +0000 (0:00:01.160) 0:00:07.071 ********** 2025-04-13 00:36:57.912342 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-13 00:36:57.913875 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-13 00:36:57.913910 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-13 00:36:57.917067 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-13 00:36:57.917880 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-13 00:36:57.918248 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-13 00:36:57.919460 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-13 00:36:57.919972 | orchestrator | 2025-04-13 00:36:57.921158 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-04-13 00:36:57.923133 | orchestrator | Sunday 13 April 2025 00:36:57 +0000 (0:00:01.838) 0:00:08.910 ********** 2025-04-13 00:36:59.608466 | orchestrator | changed: [testbed-manager] 2025-04-13 00:36:59.610181 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:36:59.612233 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:36:59.612980 | 
orchestrator | changed: [testbed-node-2] 2025-04-13 00:36:59.613720 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:36:59.614548 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:36:59.615068 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:36:59.615900 | orchestrator | 2025-04-13 00:36:59.616480 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-04-13 00:36:59.617100 | orchestrator | Sunday 13 April 2025 00:36:59 +0000 (0:00:01.682) 0:00:10.592 ********** 2025-04-13 00:37:00.076634 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-13 00:37:00.150622 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-13 00:37:00.590195 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-13 00:37:00.590989 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-13 00:37:00.592232 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-13 00:37:00.593416 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-13 00:37:00.594165 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-13 00:37:00.595460 | orchestrator | 2025-04-13 00:37:00.596049 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-04-13 00:37:00.597036 | orchestrator | Sunday 13 April 2025 00:37:00 +0000 (0:00:00.997) 0:00:11.590 ********** 2025-04-13 00:37:01.052297 | orchestrator | ok: [testbed-manager] 2025-04-13 00:37:01.152746 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:37:01.749931 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:37:01.750281 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:37:01.751715 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:37:01.755535 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:37:01.918097 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:37:01.918215 | orchestrator | 2025-04-13 00:37:01.918234 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 
2025-04-13 00:37:01.918249 | orchestrator | Sunday 13 April 2025 00:37:01 +0000 (0:00:01.155) 0:00:12.745 ********** 2025-04-13 00:37:01.918280 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:37:02.008146 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:37:02.094626 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:37:02.177538 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:37:02.256734 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:37:02.554741 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:37:02.555397 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:37:02.556080 | orchestrator | 2025-04-13 00:37:02.557422 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-04-13 00:37:02.557638 | orchestrator | Sunday 13 April 2025 00:37:02 +0000 (0:00:00.805) 0:00:13.550 ********** 2025-04-13 00:37:04.458268 | orchestrator | ok: [testbed-manager] 2025-04-13 00:37:04.458448 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:37:04.458626 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:37:04.462444 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:37:04.464244 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:37:04.464310 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:37:04.464330 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:37:04.464366 | orchestrator | 2025-04-13 00:37:04.464596 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-04-13 00:37:04.465270 | orchestrator | Sunday 13 April 2025 00:37:04 +0000 (0:00:01.901) 0:00:15.452 ********** 2025-04-13 00:37:05.419094 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-04-13 00:37:06.535484 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-13 00:37:06.535687 | 
orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-13 00:37:06.536109 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-13 00:37:06.536778 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-13 00:37:06.537258 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-13 00:37:06.537645 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-13 00:37:06.537950 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-13 00:37:06.544278 | orchestrator | 2025-04-13 00:37:08.049019 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-04-13 00:37:08.049145 | orchestrator | Sunday 13 April 2025 00:37:06 +0000 (0:00:02.077) 0:00:17.530 ********** 2025-04-13 00:37:08.049183 | orchestrator | ok: [testbed-manager] 2025-04-13 00:37:08.053934 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:37:08.054090 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:37:08.054142 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:37:08.054158 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:37:08.054172 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:37:08.054192 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:37:08.054676 | orchestrator | 2025-04-13 00:37:08.055064 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-04-13 00:37:08.055949 | orchestrator | Sunday 13 April 2025 00:37:08 +0000 (0:00:01.516) 0:00:19.046 ********** 2025-04-13 
00:37:09.475412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:37:09.478579 | orchestrator | 2025-04-13 00:37:09.478750 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-04-13 00:37:10.035349 | orchestrator | Sunday 13 April 2025 00:37:09 +0000 (0:00:01.423) 0:00:20.469 ********** 2025-04-13 00:37:10.035500 | orchestrator | ok: [testbed-manager] 2025-04-13 00:37:10.445342 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:37:10.446503 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:37:10.448349 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:37:10.449981 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:37:10.450792 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:37:10.451588 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:37:10.452482 | orchestrator | 2025-04-13 00:37:10.453317 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-04-13 00:37:10.453913 | orchestrator | Sunday 13 April 2025 00:37:10 +0000 (0:00:00.974) 0:00:21.443 ********** 2025-04-13 00:37:10.599209 | orchestrator | ok: [testbed-manager] 2025-04-13 00:37:10.679132 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:37:10.930430 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:37:11.016785 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:37:11.103915 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:37:11.252431 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:37:11.252685 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:37:11.253518 | orchestrator | 2025-04-13 00:37:11.254120 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-04-13 00:37:11.260277 | orchestrator | Sunday 13 April 2025 00:37:11 +0000 
(0:00:00.803) 0:00:22.247 ********** 2025-04-13 00:37:11.615855 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-13 00:37:11.616015 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-04-13 00:37:11.703926 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-13 00:37:11.796594 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-04-13 00:37:11.797402 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-13 00:37:11.797457 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-04-13 00:37:12.282556 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-13 00:37:12.283053 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-04-13 00:37:12.283099 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-13 00:37:12.283482 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-04-13 00:37:12.283920 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-13 00:37:12.285863 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-04-13 00:37:12.287332 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-13 00:37:12.291119 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-04-13 00:37:12.292083 | orchestrator | 2025-04-13 00:37:12.293697 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-04-13 00:37:12.294371 | orchestrator | Sunday 13 April 2025 00:37:12 +0000 (0:00:01.032) 0:00:23.280 ********** 2025-04-13 00:37:12.624413 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:37:12.710966 | orchestrator | skipping: 
[testbed-node-0] 2025-04-13 00:37:12.799217 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:37:12.899508 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:37:12.997734 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:37:14.186885 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:37:14.187940 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:37:14.188771 | orchestrator | 2025-04-13 00:37:14.190103 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-04-13 00:37:14.190886 | orchestrator | Sunday 13 April 2025 00:37:14 +0000 (0:00:01.900) 0:00:25.180 ********** 2025-04-13 00:37:14.393250 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:37:14.480992 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:37:14.767327 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:37:14.848869 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:37:14.930560 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:37:14.971394 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:37:14.972315 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:37:14.973191 | orchestrator | 2025-04-13 00:37:14.974455 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:37:14.975513 | orchestrator | 2025-04-13 00:37:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-13 00:37:14.976767 | orchestrator | 2025-04-13 00:37:14 | INFO  | Please wait and do not abort execution. 
2025-04-13 00:37:14.976929 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-13 00:37:14.977920 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-13 00:37:14.979212 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-13 00:37:14.979563 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-13 00:37:14.980449 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-13 00:37:14.981395 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-13 00:37:14.981854 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-13 00:37:14.982616 | orchestrator | 2025-04-13 00:37:14.983091 | orchestrator | Sunday 13 April 2025 00:37:14 +0000 (0:00:00.789) 0:00:25.970 ********** 2025-04-13 00:37:14.983317 | orchestrator | =============================================================================== 2025-04-13 00:37:14.984191 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 2.08s 2025-04-13 00:37:14.984769 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.99s 2025-04-13 00:37:14.985318 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.90s 2025-04-13 00:37:14.985673 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.90s 2025-04-13 00:37:14.986302 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.84s 2025-04-13 00:37:14.987247 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.72s 2025-04-13 00:37:14.987884 | 
orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.68s 2025-04-13 00:37:14.988145 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.52s 2025-04-13 00:37:14.988677 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.42s 2025-04-13 00:37:14.988861 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.27s 2025-04-13 00:37:14.989280 | orchestrator | osism.commons.network : Create required directories --------------------- 1.16s 2025-04-13 00:37:14.989624 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.16s 2025-04-13 00:37:14.989985 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.03s 2025-04-13 00:37:14.990381 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.00s 2025-04-13 00:37:14.990721 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s 2025-04-13 00:37:14.991168 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.81s 2025-04-13 00:37:14.992587 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.80s 2025-04-13 00:37:14.992964 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.79s 2025-04-13 00:37:14.993139 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.71s 2025-04-13 00:37:15.522109 | orchestrator | + osism apply wireguard 2025-04-13 00:37:16.965763 | orchestrator | 2025-04-13 00:37:16 | INFO  | Task d55934ef-1fe7-4222-89a1-1468219c4d53 (wireguard) was prepared for execution. 
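For context on the play above: the "Copy netplan configuration" task renders /etc/netplan/01-osism.yaml on every host and the cleanup tasks then delete the cloud-init file (/etc/netplan/50-cloud-init.yaml). A netplan file of the general shape that role manages looks roughly like this; this is an illustrative sketch only, and the interface name and addresses are assumptions, not values taken from this run:

```yaml
# /etc/netplan/01-osism.yaml -- illustrative sketch; the real file is rendered
# from the role's template with testbed-specific values.
network:
  version: 2
  ethernets:
    ens3:                      # assumed interface name
      dhcp4: false
      addresses:
        - 192.168.16.10/20     # assumed management address
      nameservers:
        addresses:
          - 8.8.8.8            # assumed resolver
```

Removing 50-cloud-init.yaml while keeping 01-osism.yaml (the "Remove unused configuration files" task above) ensures only the role-managed file is applied on the next `netplan apply`.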
2025-04-13 00:37:20.154803 | orchestrator | 2025-04-13 00:37:16 | INFO  | It takes a moment until task d55934ef-1fe7-4222-89a1-1468219c4d53 (wireguard) has been started and output is visible here.
2025-04-13 00:37:20.155032 | orchestrator |
2025-04-13 00:37:20.155120 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-04-13 00:37:20.156147 | orchestrator |
2025-04-13 00:37:20.158169 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-04-13 00:37:20.158740 | orchestrator | Sunday 13 April 2025 00:37:20 +0000 (0:00:00.164) 0:00:00.164 **********
2025-04-13 00:37:21.691772 | orchestrator | ok: [testbed-manager]
2025-04-13 00:37:28.210799 | orchestrator |
2025-04-13 00:37:28.210991 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-04-13 00:37:28.211014 | orchestrator | Sunday 13 April 2025 00:37:21 +0000 (0:00:01.538) 0:00:01.703 **********
2025-04-13 00:37:28.211046 | orchestrator | changed: [testbed-manager]
2025-04-13 00:37:28.212675 | orchestrator |
2025-04-13 00:37:28.212752 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-04-13 00:37:28.213574 | orchestrator | Sunday 13 April 2025 00:37:28 +0000 (0:00:06.521) 0:00:08.225 **********
2025-04-13 00:37:28.766451 | orchestrator | changed: [testbed-manager]
2025-04-13 00:37:28.767185 | orchestrator |
2025-04-13 00:37:28.769341 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-04-13 00:37:29.222956 | orchestrator | Sunday 13 April 2025 00:37:28 +0000 (0:00:00.556) 0:00:08.781 **********
2025-04-13 00:37:29.223122 | orchestrator | changed: [testbed-manager]
2025-04-13 00:37:29.223356 | orchestrator |
2025-04-13 00:37:29.224636 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-04-13 00:37:29.224966 | orchestrator | Sunday 13 April 2025 00:37:29 +0000 (0:00:00.455) 0:00:09.237 **********
2025-04-13 00:37:29.746060 | orchestrator | ok: [testbed-manager]
2025-04-13 00:37:29.747117 | orchestrator |
2025-04-13 00:37:29.747941 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-04-13 00:37:29.748297 | orchestrator | Sunday 13 April 2025 00:37:29 +0000 (0:00:00.522) 0:00:09.760 **********
2025-04-13 00:37:30.301438 | orchestrator | ok: [testbed-manager]
2025-04-13 00:37:30.302807 | orchestrator |
2025-04-13 00:37:30.303617 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-04-13 00:37:30.303653 | orchestrator | Sunday 13 April 2025 00:37:30 +0000 (0:00:00.555) 0:00:10.315 **********
2025-04-13 00:37:30.718470 | orchestrator | ok: [testbed-manager]
2025-04-13 00:37:30.718671 | orchestrator |
2025-04-13 00:37:30.719218 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-04-13 00:37:30.719650 | orchestrator | Sunday 13 April 2025 00:37:30 +0000 (0:00:00.418) 0:00:10.733 **********
2025-04-13 00:37:31.929085 | orchestrator | changed: [testbed-manager]
2025-04-13 00:37:31.929239 | orchestrator |
2025-04-13 00:37:31.930606 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-04-13 00:37:31.931305 | orchestrator | Sunday 13 April 2025 00:37:31 +0000 (0:00:01.208) 0:00:11.942 **********
2025-04-13 00:37:32.928597 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-13 00:37:32.929101 | orchestrator | changed: [testbed-manager]
2025-04-13 00:37:32.929718 | orchestrator |
2025-04-13 00:37:32.931309 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-04-13 00:37:34.694385 | orchestrator | Sunday 13 April 2025 00:37:32 +0000 (0:00:00.998) 0:00:12.940 **********
2025-04-13 00:37:34.694533 | orchestrator | changed: [testbed-manager]
2025-04-13 00:37:34.695473 | orchestrator |
2025-04-13 00:37:34.697093 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-04-13 00:37:34.698889 | orchestrator | Sunday 13 April 2025 00:37:34 +0000 (0:00:01.766) 0:00:14.707 **********
2025-04-13 00:37:35.638970 | orchestrator | changed: [testbed-manager]
2025-04-13 00:37:35.639180 | orchestrator |
2025-04-13 00:37:35.640136 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:37:35.640921 | orchestrator | 2025-04-13 00:37:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:37:35.642131 | orchestrator | 2025-04-13 00:37:35 | INFO  | Please wait and do not abort execution.
2025-04-13 00:37:35.642162 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:37:35.642506 | orchestrator |
2025-04-13 00:37:35.643179 | orchestrator | Sunday 13 April 2025 00:37:35 +0000 (0:00:00.946) 0:00:15.654 **********
2025-04-13 00:37:35.643687 | orchestrator | ===============================================================================
2025-04-13 00:37:35.643985 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.52s
2025-04-13 00:37:35.644908 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.77s
2025-04-13 00:37:35.645998 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.54s
2025-04-13 00:37:35.646275 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.21s
2025-04-13 00:37:35.646932 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.00s
2025-04-13 00:37:35.647541 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s
2025-04-13 00:37:35.649328 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s
2025-04-13 00:37:35.649466 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.56s
2025-04-13 00:37:35.650481 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s
2025-04-13 00:37:35.651411 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s
2025-04-13 00:37:35.651892 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2025-04-13 00:37:36.240364 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-04-13 00:37:36.283723 | orchestrator |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2025-04-13 00:37:36.368604 | orchestrator |                                  Dload  Upload   Total   Spent    Left  Speed
2025-04-13 00:37:36.368760 | orchestrator |   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0 100    14  100    14    0     0    164      0 --:--:-- --:--:-- --:--:--   164
2025-04-13 00:37:36.380429 | orchestrator | + osism apply --environment custom workarounds
2025-04-13 00:37:37.845424 | orchestrator | 2025-04-13 00:37:37 | INFO  | Trying to run play workarounds in environment custom
2025-04-13 00:37:37.892237 | orchestrator | 2025-04-13 00:37:37 | INFO  | Task 1095454e-bbb7-4189-9340-860b2d1ac805 (workarounds) was prepared for execution.
2025-04-13 00:37:41.047167 | orchestrator | 2025-04-13 00:37:37 | INFO  | It takes a moment until task 1095454e-bbb7-4189-9340-860b2d1ac805 (workarounds) has been started and output is visible here.
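The wireguard play above created the server key pair and preshared key on testbed-manager and wrote /etc/wireguard/wg0.conf, which wg-quick@wg0.service then brings up. A wg-quick configuration of that general shape looks roughly as follows; this is a sketch with placeholder keys and an assumed subnet, not the file generated by this run:

```ini
; /etc/wireguard/wg0.conf -- illustrative sketch; keys and addresses are placeholders.
[Interface]
Address = 192.168.48.1/24            ; assumed VPN subnet
ListenPort = 51820                   ; default WireGuard port, assumed
PrivateKey = <server-private-key>    ; generated by "Create public and private key - server"

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>       ; generated by "Create preshared key"
AllowedIPs = 192.168.48.2/32
```

The "Copy client configuration files" task produces the matching client-side profile, which the prepare-wireguard-configuration.sh script below makes available for the job.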
2025-04-13 00:37:41.047349 | orchestrator |
2025-04-13 00:37:41.048104 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-13 00:37:41.048168 | orchestrator |
2025-04-13 00:37:41.048794 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-04-13 00:37:41.050937 | orchestrator | Sunday 13 April 2025 00:37:41 +0000 (0:00:00.159) 0:00:00.159 **********
2025-04-13 00:37:41.216819 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-04-13 00:37:41.300991 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-04-13 00:37:41.385246 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-04-13 00:37:41.469552 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-04-13 00:37:41.554716 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-04-13 00:37:41.844281 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-04-13 00:37:41.846137 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-04-13 00:37:41.847047 | orchestrator |
2025-04-13 00:37:41.847964 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-04-13 00:37:41.848847 | orchestrator |
2025-04-13 00:37:41.849748 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-04-13 00:37:41.850488 | orchestrator | Sunday 13 April 2025 00:37:41 +0000 (0:00:00.798) 0:00:00.958 **********
2025-04-13 00:37:44.569243 | orchestrator | ok: [testbed-manager]
2025-04-13 00:37:44.573780 | orchestrator |
2025-04-13 00:37:44.574504 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-04-13 00:37:44.575924 | orchestrator |
2025-04-13 00:37:44.578008 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-04-13 00:37:44.578710 | orchestrator | Sunday 13 April 2025 00:37:44 +0000 (0:00:02.719) 0:00:03.677 **********
2025-04-13 00:37:46.435387 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:37:46.436111 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:37:46.436237 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:37:46.436936 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:37:46.437899 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:37:46.438537 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:37:46.438970 | orchestrator |
2025-04-13 00:37:46.439941 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-04-13 00:37:46.440294 | orchestrator |
2025-04-13 00:37:46.441251 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-04-13 00:37:46.443505 | orchestrator | Sunday 13 April 2025 00:37:46 +0000 (0:00:01.867) 0:00:05.544 **********
2025-04-13 00:37:47.924069 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-04-13 00:37:47.924797 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-04-13 00:37:47.925118 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-04-13 00:37:47.926430 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-04-13 00:37:47.927762 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-04-13 00:37:47.928366 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-04-13 00:37:47.930216 | orchestrator |
2025-04-13 00:37:47.930949 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-04-13 00:37:47.931814 | orchestrator | Sunday 13 April 2025 00:37:47 +0000 (0:00:01.490) 0:00:07.035 **********
2025-04-13 00:37:51.623221 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:37:51.626075 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:37:51.626121 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:37:51.626971 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:37:51.627295 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:37:51.628958 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:37:51.630146 | orchestrator |
2025-04-13 00:37:51.630322 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-04-13 00:37:51.631031 | orchestrator | Sunday 13 April 2025 00:37:51 +0000 (0:00:03.701) 0:00:10.737 **********
2025-04-13 00:37:51.795045 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:37:51.877564 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:37:51.955022 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:37:52.200005 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:37:52.342655 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:37:52.344206 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:37:52.344921 | orchestrator |
2025-04-13 00:37:52.345873 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-04-13 00:37:52.349359 | orchestrator |
2025-04-13 00:37:53.955950 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-04-13 00:37:53.956173 | orchestrator | Sunday 13 April 2025 00:37:52 +0000 (0:00:00.717) 0:00:11.454 **********
2025-04-13 00:37:53.956218 | orchestrator | changed: [testbed-manager]
2025-04-13 00:37:53.956308 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:37:53.958612 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:37:53.960013 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:37:53.960933 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:37:53.962136 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:37:53.962948 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:37:53.964034 | orchestrator |
2025-04-13 00:37:53.964984 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-04-13 00:37:53.965587 | orchestrator | Sunday 13 April 2025 00:37:53 +0000 (0:00:01.611) 0:00:13.066 **********
2025-04-13 00:37:55.600909 | orchestrator | changed: [testbed-manager]
2025-04-13 00:37:55.601144 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:37:55.601899 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:37:55.602172 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:37:55.602712 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:37:55.603449 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:37:55.603676 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:37:55.604032 | orchestrator |
2025-04-13 00:37:55.604539 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-04-13 00:37:55.605096 | orchestrator | Sunday 13 April 2025 00:37:55 +0000 (0:00:01.640) 0:00:14.707 **********
2025-04-13 00:37:57.070552 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:37:57.070771 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:37:57.072092 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:37:57.073635 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:37:57.075035 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:37:57.075678 | orchestrator | ok: [testbed-manager]
2025-04-13 00:37:57.076859 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:37:57.077702 | orchestrator |
2025-04-13 00:37:57.078458 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-04-13 00:37:57.079088 | orchestrator
| Sunday 13 April 2025 00:37:57 +0000 (0:00:01.477) 0:00:16.184 ********** 2025-04-13 00:37:58.882282 | orchestrator | changed: [testbed-manager] 2025-04-13 00:37:58.882468 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:37:58.884138 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:37:58.885397 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:37:58.886102 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:37:58.887425 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:37:58.888083 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:37:58.888808 | orchestrator | 2025-04-13 00:37:58.889807 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-04-13 00:37:58.890234 | orchestrator | Sunday 13 April 2025 00:37:58 +0000 (0:00:01.811) 0:00:17.996 ********** 2025-04-13 00:37:59.066330 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:37:59.142350 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:37:59.222367 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:37:59.307142 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:37:59.556553 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:37:59.703589 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:37:59.703747 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:37:59.704914 | orchestrator | 2025-04-13 00:37:59.705807 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-04-13 00:37:59.706455 | orchestrator | 2025-04-13 00:37:59.708493 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-04-13 00:37:59.709421 | orchestrator | Sunday 13 April 2025 00:37:59 +0000 (0:00:00.820) 0:00:18.816 ********** 2025-04-13 00:38:02.048020 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:38:02.051013 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:38:02.051082 | orchestrator | ok: 
[testbed-manager] 2025-04-13 00:38:02.051110 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:38:02.052972 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:38:02.053639 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:38:02.054123 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:38:02.055379 | orchestrator | 2025-04-13 00:38:02.055689 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:38:02.056484 | orchestrator | 2025-04-13 00:38:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-13 00:38:02.056727 | orchestrator | 2025-04-13 00:38:02 | INFO  | Please wait and do not abort execution. 2025-04-13 00:38:02.057722 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-13 00:38:02.058057 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:38:02.058693 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:38:02.059132 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:38:02.059497 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:38:02.059928 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:38:02.060462 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:38:02.060985 | orchestrator | 2025-04-13 00:38:02.061208 | orchestrator | Sunday 13 April 2025 00:38:02 +0000 (0:00:02.342) 0:00:21.159 ********** 2025-04-13 00:38:02.061620 | orchestrator | =============================================================================== 2025-04-13 00:38:02.062099 | orchestrator | Run update-ca-certificates 
---------------------------------------------- 3.70s 2025-04-13 00:38:02.062554 | orchestrator | Apply netplan configuration --------------------------------------------- 2.72s 2025-04-13 00:38:02.063014 | orchestrator | Install python3-docker -------------------------------------------------- 2.34s 2025-04-13 00:38:02.063282 | orchestrator | Apply netplan configuration --------------------------------------------- 1.87s 2025-04-13 00:38:02.063803 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.81s 2025-04-13 00:38:02.064007 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.64s 2025-04-13 00:38:02.064518 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.61s 2025-04-13 00:38:02.064787 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s 2025-04-13 00:38:02.065216 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.48s 2025-04-13 00:38:02.066093 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.82s 2025-04-13 00:38:02.066200 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.80s 2025-04-13 00:38:02.066493 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.72s 2025-04-13 00:38:02.611360 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-04-13 00:38:04.088690 | orchestrator | 2025-04-13 00:38:04 | INFO  | Task bc6184ac-c8f6-49de-ac60-17f3bb88e4ec (reboot) was prepared for execution. 2025-04-13 00:38:07.221895 | orchestrator | 2025-04-13 00:38:04 | INFO  | It takes a moment until task bc6184ac-c8f6-49de-ac60-17f3bb88e4ec (reboot) has been started and output is visible here. 
2025-04-13 00:38:07.222011 | orchestrator |
2025-04-13 00:38:07.222729 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-04-13 00:38:07.225232 | orchestrator |
2025-04-13 00:38:07.226363 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-04-13 00:38:07.226735 | orchestrator | Sunday 13 April 2025 00:38:07 +0000 (0:00:00.147) 0:00:00.147 **********
2025-04-13 00:38:07.320216 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:38:07.320633 | orchestrator |
2025-04-13 00:38:07.322811 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-04-13 00:38:08.219768 | orchestrator | Sunday 13 April 2025 00:38:07 +0000 (0:00:00.100) 0:00:00.248 **********
2025-04-13 00:38:08.220032 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:38:08.223948 | orchestrator |
2025-04-13 00:38:08.226102 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-04-13 00:38:08.226186 | orchestrator | Sunday 13 April 2025 00:38:08 +0000 (0:00:00.897) 0:00:01.146 **********
2025-04-13 00:38:08.327719 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:38:08.328130 | orchestrator |
2025-04-13 00:38:08.330522 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-04-13 00:38:08.330871 | orchestrator |
2025-04-13 00:38:08.330901 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-04-13 00:38:08.330922 | orchestrator | Sunday 13 April 2025 00:38:08 +0000 (0:00:00.106) 0:00:01.253 **********
2025-04-13 00:38:08.420419 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:38:08.421248 | orchestrator |
2025-04-13 00:38:08.421954 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-04-13 00:38:08.422451 | orchestrator | Sunday 13 April 2025 00:38:08 +0000 (0:00:00.095) 0:00:01.349 **********
2025-04-13 00:38:09.055298 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:38:09.056061 | orchestrator |
2025-04-13 00:38:09.056120 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-04-13 00:38:09.057563 | orchestrator | Sunday 13 April 2025 00:38:09 +0000 (0:00:00.634) 0:00:01.984 **********
2025-04-13 00:38:09.164046 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:38:09.164181 | orchestrator |
2025-04-13 00:38:09.165733 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-04-13 00:38:09.166663 | orchestrator |
2025-04-13 00:38:09.167755 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-04-13 00:38:09.168564 | orchestrator | Sunday 13 April 2025 00:38:09 +0000 (0:00:00.107) 0:00:02.091 **********
2025-04-13 00:38:09.280799 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:38:09.282351 | orchestrator |
2025-04-13 00:38:09.283713 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-04-13 00:38:09.283752 | orchestrator | Sunday 13 April 2025 00:38:09 +0000 (0:00:00.116) 0:00:02.207 **********
2025-04-13 00:38:10.025545 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:38:10.026795 | orchestrator |
2025-04-13 00:38:10.028636 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-04-13 00:38:10.145880 | orchestrator | Sunday 13 April 2025 00:38:10 +0000 (0:00:00.746) 0:00:02.954 **********
2025-04-13 00:38:10.146103 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:38:10.147100 | orchestrator |
2025-04-13 00:38:10.148041 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-04-13 00:38:10.149035 | orchestrator |
2025-04-13 00:38:10.149538 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-04-13 00:38:10.150106 | orchestrator | Sunday 13 April 2025 00:38:10 +0000 (0:00:00.117) 0:00:03.072 **********
2025-04-13 00:38:10.260751 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:38:10.261297 | orchestrator |
2025-04-13 00:38:10.261548 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-04-13 00:38:10.262450 | orchestrator | Sunday 13 April 2025 00:38:10 +0000 (0:00:00.116) 0:00:03.188 **********
2025-04-13 00:38:10.894761 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:38:10.894950 | orchestrator |
2025-04-13 00:38:10.895374 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-04-13 00:38:10.896002 | orchestrator | Sunday 13 April 2025 00:38:10 +0000 (0:00:00.635) 0:00:03.824 **********
2025-04-13 00:38:11.004719 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:38:11.004931 | orchestrator |
2025-04-13 00:38:11.006146 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-04-13 00:38:11.006571 | orchestrator |
2025-04-13 00:38:11.006614 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-04-13 00:38:11.008185 | orchestrator | Sunday 13 April 2025 00:38:10 +0000 (0:00:00.107) 0:00:03.931 **********
2025-04-13 00:38:11.110151 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:38:11.793204 | orchestrator |
2025-04-13 00:38:11.793292 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-04-13 00:38:11.793301 | orchestrator | Sunday 13 April 2025 00:38:11 +0000 (0:00:00.106) 0:00:04.038 **********
2025-04-13 00:38:11.793318 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:38:11.793645 | orchestrator |
2025-04-13 00:38:11.794077 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-04-13 00:38:11.794706 | orchestrator | Sunday 13 April 2025 00:38:11 +0000 (0:00:00.683) 0:00:04.721 **********
2025-04-13 00:38:11.918197 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:38:11.919522 | orchestrator |
2025-04-13 00:38:11.919685 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-04-13 00:38:11.921070 | orchestrator |
2025-04-13 00:38:11.921594 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-04-13 00:38:11.922132 | orchestrator | Sunday 13 April 2025 00:38:11 +0000 (0:00:00.125) 0:00:04.847 **********
2025-04-13 00:38:12.022619 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:38:12.025716 | orchestrator |
2025-04-13 00:38:12.700682 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-04-13 00:38:12.700824 | orchestrator | Sunday 13 April 2025 00:38:12 +0000 (0:00:00.104) 0:00:04.952 **********
2025-04-13 00:38:12.701678 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:38:12.742808 | orchestrator |
2025-04-13 00:38:12.742945 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-04-13 00:38:12.742962 | orchestrator | Sunday 13 April 2025 00:38:12 +0000 (0:00:00.676) 0:00:05.629 **********
2025-04-13 00:38:12.742991 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:38:12.743056 | orchestrator |
2025-04-13 00:38:12.743880 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:38:12.744474 | orchestrator | 2025-04-13 00:38:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:38:12.744780 | orchestrator | 2025-04-13 00:38:12 | INFO  | Please wait and do not abort execution.
2025-04-13 00:38:12.746150 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-13 00:38:12.747240 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-13 00:38:12.747715 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-13 00:38:12.748599 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-13 00:38:12.749033 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-13 00:38:12.749921 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-13 00:38:12.750781 | orchestrator |
2025-04-13 00:38:12.751126 | orchestrator | Sunday 13 April 2025 00:38:12 +0000 (0:00:00.042) 0:00:05.671 **********
2025-04-13 00:38:12.751889 | orchestrator | ===============================================================================
2025-04-13 00:38:12.752636 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.28s
2025-04-13 00:38:12.753147 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.64s
2025-04-13 00:38:12.753907 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.61s
2025-04-13 00:38:13.297111 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-04-13 00:38:14.733634 | orchestrator | 2025-04-13 00:38:14 | INFO  | Task b5120cfc-e9d6-42c5-a05f-138d76e43845 (wait-for-connection) was prepared for execution.
2025-04-13 00:38:17.894296 | orchestrator | 2025-04-13 00:38:14 | INFO  | It takes a moment until task b5120cfc-e9d6-42c5-a05f-138d76e43845 (wait-for-connection) has been started and output is visible here.
2025-04-13 00:38:17.894456 | orchestrator |
2025-04-13 00:38:17.895357 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-04-13 00:38:17.898888 | orchestrator |
2025-04-13 00:38:17.900637 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-04-13 00:38:17.901178 | orchestrator | Sunday 13 April 2025 00:38:17 +0000 (0:00:00.206) 0:00:00.206 **********
2025-04-13 00:38:30.537492 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:38:30.537723 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:38:30.537752 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:38:30.537769 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:38:30.537783 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:38:30.537797 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:38:30.537811 | orchestrator |
2025-04-13 00:38:30.537826 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:38:30.537905 | orchestrator | 2025-04-13 00:38:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:38:30.538531 | orchestrator | 2025-04-13 00:38:30 | INFO  | Please wait and do not abort execution.
2025-04-13 00:38:30.538594 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:38:30.539123 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:38:30.542748 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:38:30.543370 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:38:30.543478 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:38:30.543543 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:38:30.543628 | orchestrator |
2025-04-13 00:38:30.543660 | orchestrator | Sunday 13 April 2025 00:38:30 +0000 (0:00:12.642) 0:00:12.848 **********
2025-04-13 00:38:30.543718 | orchestrator | ===============================================================================
2025-04-13 00:38:30.543835 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.64s
2025-04-13 00:38:31.048948 | orchestrator | + osism apply hddtemp
2025-04-13 00:38:32.507664 | orchestrator | 2025-04-13 00:38:32 | INFO  | Task cbda723f-941f-40a5-9cd4-91dba75a2a30 (hddtemp) was prepared for execution.
2025-04-13 00:38:35.724102 | orchestrator | 2025-04-13 00:38:32 | INFO  | It takes a moment until task cbda723f-941f-40a5-9cd4-91dba75a2a30 (hddtemp) has been started and output is visible here.
2025-04-13 00:38:35.724271 | orchestrator |
2025-04-13 00:38:35.726945 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-04-13 00:38:35.727002 | orchestrator |
2025-04-13 00:38:35.875102 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-04-13 00:38:35.875227 | orchestrator | Sunday 13 April 2025 00:38:35 +0000 (0:00:00.222) 0:00:00.222 **********
2025-04-13 00:38:35.875262 | orchestrator | ok: [testbed-manager]
2025-04-13 00:38:35.953223 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:38:36.026619 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:38:36.102323 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:38:36.177213 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:38:36.434647 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:38:36.434818 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:38:36.434976 | orchestrator |
2025-04-13 00:38:36.435424 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-04-13 00:38:36.436105 | orchestrator | Sunday 13 April 2025 00:38:36 +0000 (0:00:00.712) 0:00:00.934 **********
2025-04-13 00:38:37.624117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:38:37.625965 | orchestrator |
2025-04-13 00:38:37.627090 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-04-13 00:38:37.627835 | orchestrator | Sunday 13 April 2025 00:38:37 +0000 (0:00:01.187) 0:00:02.121 **********
2025-04-13 00:38:39.620261 | orchestrator | ok: [testbed-manager]
2025-04-13 00:38:39.621797 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:38:39.622667 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:38:39.622702 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:38:39.623160 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:38:39.624596 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:38:39.624927 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:38:39.625586 | orchestrator |
2025-04-13 00:38:39.626161 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-04-13 00:38:39.626567 | orchestrator | Sunday 13 April 2025 00:38:39 +0000 (0:00:01.998) 0:00:04.120 **********
2025-04-13 00:38:40.182436 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:38:40.269689 | orchestrator | changed: [testbed-manager]
2025-04-13 00:38:40.814229 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:38:40.814537 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:38:40.815383 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:38:40.816229 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:38:40.819887 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:38:40.820535 | orchestrator |
2025-04-13 00:38:40.822266 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-04-13 00:38:42.087076 | orchestrator | Sunday 13 April 2025 00:38:40 +0000 (0:00:01.190) 0:00:05.310 **********
2025-04-13 00:38:42.087222 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:38:42.087328 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:38:42.087721 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:38:42.088017 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:38:42.088047 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:38:42.088485 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:38:42.092071 | orchestrator | ok: [testbed-manager]
2025-04-13 00:38:42.092112 | orchestrator |
2025-04-13 00:38:42.092624 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-04-13 00:38:42.092652 | orchestrator | Sunday 13 April 2025 00:38:42 +0000 (0:00:01.273) 0:00:06.583 **********
2025-04-13 00:38:42.387835 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:38:42.475425 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:38:42.559365 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:38:42.644246 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:38:42.772450 | orchestrator | changed: [testbed-manager]
2025-04-13 00:38:42.774932 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:38:42.776488 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:38:42.777782 | orchestrator |
2025-04-13 00:38:42.779096 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-04-13 00:38:42.779714 | orchestrator | Sunday 13 April 2025 00:38:42 +0000 (0:00:00.686) 0:00:07.270 **********
2025-04-13 00:38:54.490952 | orchestrator | changed: [testbed-manager]
2025-04-13 00:38:54.492822 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:38:54.492880 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:38:54.492897 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:38:54.492912 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:38:54.492933 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:38:54.493356 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:38:54.496569 | orchestrator |
2025-04-13 00:38:54.497295 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-04-13 00:38:54.498749 | orchestrator | Sunday 13 April 2025 00:38:54 +0000 (0:00:11.712) 0:00:18.982 **********
2025-04-13 00:38:55.702594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:38:55.703145 | orchestrator |
2025-04-13 00:38:55.703194 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-04-13 00:38:55.707019 | orchestrator | Sunday 13 April 2025 00:38:55 +0000 (0:00:01.216) 0:00:20.199 **********
2025-04-13 00:38:57.574266 | orchestrator | changed: [testbed-manager]
2025-04-13 00:38:57.576145 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:38:57.578489 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:38:57.580022 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:38:57.580912 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:38:57.581772 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:38:57.583228 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:38:57.584133 | orchestrator |
2025-04-13 00:38:57.585767 | orchestrator | 2025-04-13 00:38:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:38:57.586057 | orchestrator | 2025-04-13 00:38:57 | INFO  | Please wait and do not abort execution.
2025-04-13 00:38:57.586096 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:38:57.587120 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:38:57.587903 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-13 00:38:57.588500 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-13 00:38:57.589141 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-13 00:38:57.589932 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-13 00:38:57.590536 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-13 00:38:57.591007 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-13 00:38:57.591568 | orchestrator |
2025-04-13 00:38:57.592076 | orchestrator | Sunday 13 April 2025 00:38:57 +0000 (0:00:01.874) 0:00:22.074 **********
2025-04-13 00:38:57.592799 | orchestrator | ===============================================================================
2025-04-13 00:38:57.593334 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.71s
2025-04-13 00:38:57.594124 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.00s
2025-04-13 00:38:57.594466 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.87s
2025-04-13 00:38:57.595178 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.27s
2025-04-13 00:38:57.595599 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.22s
2025-04-13 00:38:57.596224 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.19s
2025-04-13 00:38:57.596548 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.19s
2025-04-13 00:38:57.597121 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.71s
2025-04-13 00:38:57.599015 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.69s
2025-04-13 00:38:58.184308 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-04-13 00:38:59.514710 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-04-13 00:38:59.515662 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-04-13 00:38:59.515701 | orchestrator | + local max_attempts=60
2025-04-13 00:38:59.515718 | orchestrator | + local name=ceph-ansible
2025-04-13 00:38:59.515734 | orchestrator | + local attempt_num=1
2025-04-13 00:38:59.515757 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-04-13 00:38:59.549315 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-13 00:38:59.549460 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-04-13 00:38:59.549483 | orchestrator | + local max_attempts=60
2025-04-13 00:38:59.549499 | orchestrator | + local name=kolla-ansible
2025-04-13 00:38:59.549514 | orchestrator | + local attempt_num=1
2025-04-13 00:38:59.549532 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-04-13 00:38:59.573588 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-13 00:38:59.573737 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-04-13 00:38:59.573759 | orchestrator | + local max_attempts=60
2025-04-13 00:38:59.573771 | orchestrator | + local name=osism-ansible
2025-04-13 00:38:59.573782 | orchestrator | + local attempt_num=1
2025-04-13 00:38:59.573797 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-04-13 00:38:59.599454 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-13 00:38:59.770799 | orchestrator | + [[ true == \t\r\u\e ]]
2025-04-13 00:38:59.770933 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-04-13 00:38:59.770965 | orchestrator | ARA in ceph-ansible already disabled.
2025-04-13 00:38:59.912674 | orchestrator | ARA in kolla-ansible already disabled.
2025-04-13 00:39:00.064378 | orchestrator | ARA in osism-ansible already disabled.
2025-04-13 00:39:00.229314 | orchestrator | ARA in osism-kubernetes already disabled.
2025-04-13 00:39:00.229966 | orchestrator | + osism apply gather-facts
2025-04-13 00:39:01.684235 | orchestrator | 2025-04-13 00:39:01 | INFO  | Task fafcb94a-c3e8-4122-acff-a8012a37a2c4 (gather-facts) was prepared for execution.
2025-04-13 00:39:04.763519 | orchestrator | 2025-04-13 00:39:01 | INFO  | It takes a moment until task fafcb94a-c3e8-4122-acff-a8012a37a2c4 (gather-facts) has been started and output is visible here. 2025-04-13 00:39:04.764595 | orchestrator | 2025-04-13 00:39:04.764721 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-13 00:39:04.764747 | orchestrator | 2025-04-13 00:39:04.765877 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-13 00:39:04.766482 | orchestrator | Sunday 13 April 2025 00:39:04 +0000 (0:00:00.158) 0:00:00.158 ********** 2025-04-13 00:39:09.665947 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:39:09.666243 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:39:09.666281 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:39:09.667995 | orchestrator | ok: [testbed-manager] 2025-04-13 00:39:09.668564 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:39:09.669124 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:39:09.669792 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:39:09.674524 | orchestrator | 2025-04-13 00:39:09.675316 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-13 00:39:09.675547 | orchestrator | 2025-04-13 00:39:09.676031 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-13 00:39:09.676992 | orchestrator | Sunday 13 April 2025 00:39:09 +0000 (0:00:04.905) 0:00:05.063 ********** 2025-04-13 00:39:09.851028 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:39:09.926350 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:39:10.001664 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:39:10.081664 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:39:10.157577 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:39:10.196968 | orchestrator | skipping: [testbed-node-4] 
2025-04-13 00:39:10.197258 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:39:10.197303 | orchestrator | 2025-04-13 00:39:10.197336 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:39:10.197671 | orchestrator | 2025-04-13 00:39:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-13 00:39:10.198098 | orchestrator | 2025-04-13 00:39:10 | INFO  | Please wait and do not abort execution. 2025-04-13 00:39:10.199002 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-13 00:39:10.199616 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-13 00:39:10.200179 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-13 00:39:10.201032 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-13 00:39:10.201709 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-13 00:39:10.202492 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-13 00:39:10.202673 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-13 00:39:10.203135 | orchestrator | 2025-04-13 00:39:10.203562 | orchestrator | Sunday 13 April 2025 00:39:10 +0000 (0:00:00.530) 0:00:05.594 ********** 2025-04-13 00:39:10.204040 | orchestrator | =============================================================================== 2025-04-13 00:39:10.204400 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.91s 2025-04-13 00:39:10.205035 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-04-13 00:39:10.780793 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-04-13 00:39:10.793993 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-04-13 00:39:10.806160 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-04-13 00:39:10.818477 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-04-13 00:39:10.831212 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-04-13 00:39:10.843285 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-04-13 00:39:10.855358 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-04-13 00:39:10.867999 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-04-13 00:39:10.887457 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-04-13 00:39:10.902472 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-04-13 00:39:10.920027 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-04-13 00:39:10.937666 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-04-13 00:39:10.956779 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-04-13 00:39:10.979112 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-04-13 00:39:10.999047 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-04-13 00:39:11.019330 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-04-13 00:39:11.038179 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-04-13 00:39:11.058319 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-04-13 00:39:11.077399 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-04-13 00:39:11.099301 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-04-13 00:39:11.120797 | orchestrator | + [[ false == \t\r\u\e ]] 2025-04-13 00:39:11.231068 | orchestrator | changed 2025-04-13 00:39:11.291635 | 2025-04-13 00:39:11.291777 | TASK [Deploy services] 2025-04-13 00:39:11.393996 | orchestrator | skipping: Conditional result was False 2025-04-13 00:39:11.416624 | 2025-04-13 00:39:11.416774 | TASK [Deploy in a nutshell] 2025-04-13 00:39:12.128835 | orchestrator | + set -e 2025-04-13 00:39:12.128989 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-13 00:39:12.129005 | orchestrator | ++ export INTERACTIVE=false 2025-04-13 00:39:12.129014 | orchestrator | ++ INTERACTIVE=false 2025-04-13 00:39:12.129038 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-13 00:39:12.129046 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-13 00:39:12.129054 | orchestrator | + source /opt/manager-vars.sh 2025-04-13 00:39:12.129065 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-13 00:39:12.129077 | orchestrator | ++ NUMBER_OF_NODES=6 
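The long run of `sudo ln -sf` calls above installs the deploy/upgrade scripts as commands on the PATH. The same effect can be expressed as a small loop; this is a sketch, not the testbed's actual script: `link_helpers`, its (abbreviated) table, and the overridable target directory are illustrative, and `sudo` is omitted so the sketch runs unprivileged:

```shell
# Sketch only: map command names to scripts and symlink each into a
# target directory (defaults to /usr/local/bin, overridable for tests).
link_helpers() {
    local bindir=${1:-/usr/local/bin}
    declare -A helpers=(
        [deploy-kubernetes]=/opt/configuration/scripts/deploy/500-kubernetes.sh
        [deploy-openstack]=/opt/configuration/scripts/deploy/300-openstack.sh
        [upgrade-openstack]=/opt/configuration/scripts/upgrade/300-openstack.sh
    )
    local name
    for name in "${!helpers[@]}"; do
        ln -sf "${helpers[$name]}" "$bindir/$name"
    done
}
```

`ln -sf` makes the calls idempotent: re-running the bootstrap simply replaces any existing link, which is why the log shows no errors on repeated deploys.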
2025-04-13 00:39:12.129084 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-13 00:39:12.129091 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-13 00:39:12.129097 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-13 00:39:12.129104 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-13 00:39:12.129111 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-13 00:39:12.129117 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-13 00:39:12.129124 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-13 00:39:12.129131 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-13 00:39:12.129138 | orchestrator | ++ export ARA=false 2025-04-13 00:39:12.129144 | orchestrator | ++ ARA=false 2025-04-13 00:39:12.129151 | orchestrator | ++ export TEMPEST=false 2025-04-13 00:39:12.129157 | orchestrator | ++ TEMPEST=false 2025-04-13 00:39:12.129164 | orchestrator | ++ export IS_ZUUL=true 2025-04-13 00:39:12.129170 | orchestrator | ++ IS_ZUUL=true 2025-04-13 00:39:12.129177 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.13 2025-04-13 00:39:12.129184 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.13 2025-04-13 00:39:12.129191 | orchestrator | ++ export EXTERNAL_API=false 2025-04-13 00:39:12.129197 | orchestrator | ++ EXTERNAL_API=false 2025-04-13 00:39:12.129204 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-13 00:39:12.129210 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-13 00:39:12.129224 | orchestrator | 2025-04-13 00:39:12.130192 | orchestrator | # PULL IMAGES 2025-04-13 00:39:12.130207 | orchestrator | 2025-04-13 00:39:12.130218 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-13 00:39:12.130226 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-13 00:39:12.130234 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-13 00:39:12.130241 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-13 00:39:12.130249 | orchestrator | + echo 2025-04-13 00:39:12.130256 | orchestrator | + echo '# PULL IMAGES' 2025-04-13 
00:39:12.130263 | orchestrator | + echo 2025-04-13 00:39:12.130274 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-13 00:39:12.200689 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-13 00:39:13.591206 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-04-13 00:39:13.591365 | orchestrator | 2025-04-13 00:39:13 | INFO  | Trying to run play pull-images in environment custom 2025-04-13 00:39:13.638768 | orchestrator | 2025-04-13 00:39:13 | INFO  | Task 71095e75-9979-4b40-8b6b-45d7eed70be6 (pull-images) was prepared for execution. 2025-04-13 00:39:16.704750 | orchestrator | 2025-04-13 00:39:13 | INFO  | It takes a moment until task 71095e75-9979-4b40-8b6b-45d7eed70be6 (pull-images) has been started and output is visible here. 2025-04-13 00:39:16.704884 | orchestrator | 2025-04-13 00:39:16.705683 | orchestrator | PLAY [Pull images] ************************************************************* 2025-04-13 00:39:16.705957 | orchestrator | 2025-04-13 00:39:16.706795 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-04-13 00:39:16.707687 | orchestrator | Sunday 13 April 2025 00:39:16 +0000 (0:00:00.140) 0:00:00.140 ********** 2025-04-13 00:39:52.907381 | orchestrator | changed: [testbed-manager] 2025-04-13 00:40:40.098632 | orchestrator | 2025-04-13 00:40:40.098759 | orchestrator | TASK [Pull other images] ******************************************************* 2025-04-13 00:40:40.098774 | orchestrator | Sunday 13 April 2025 00:39:52 +0000 (0:00:36.201) 0:00:36.341 ********** 2025-04-13 00:40:40.098798 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-04-13 00:40:40.101614 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-04-13 00:40:40.101636 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-04-13 00:40:40.102580 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-04-13 00:40:40.102617 | orchestrator | changed: [testbed-manager] => (item=common) 
2025-04-13 00:40:40.102627 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-04-13 00:40:40.102636 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-04-13 00:40:40.102647 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-04-13 00:40:40.102679 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-04-13 00:40:40.102688 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-04-13 00:40:40.102700 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-04-13 00:40:40.102714 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-04-13 00:40:40.102841 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-04-13 00:40:40.103194 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-04-13 00:40:40.104004 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-04-13 00:40:40.104677 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-04-13 00:40:40.104976 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-04-13 00:40:40.105689 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-04-13 00:40:40.106220 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-04-13 00:40:40.106662 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-04-13 00:40:40.107306 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-04-13 00:40:40.107833 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-04-13 00:40:40.108711 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-04-13 00:40:40.109153 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-04-13 00:40:40.109783 | orchestrator | 2025-04-13 00:40:40.110665 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:40:40.111042 | orchestrator | 2025-04-13 00:40:40 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-04-13 00:40:40.111412 | orchestrator | 2025-04-13 00:40:40 | INFO  | Please wait and do not abort execution. 2025-04-13 00:40:40.111745 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 00:40:40.112253 | orchestrator | 2025-04-13 00:40:40.112595 | orchestrator | Sunday 13 April 2025 00:40:40 +0000 (0:00:47.195) 0:01:23.537 ********** 2025-04-13 00:40:40.112814 | orchestrator | =============================================================================== 2025-04-13 00:40:40.113508 | orchestrator | Pull other images ------------------------------------------------------ 47.20s 2025-04-13 00:40:40.113739 | orchestrator | Pull keystone image ---------------------------------------------------- 36.20s 2025-04-13 00:40:42.336933 | orchestrator | 2025-04-13 00:40:42 | INFO  | Trying to run play wipe-partitions in environment custom 2025-04-13 00:40:42.401737 | orchestrator | 2025-04-13 00:40:42 | INFO  | Task 2056df8e-decc-43cc-a8cb-c1159ae971c0 (wipe-partitions) was prepared for execution. 2025-04-13 00:40:45.609653 | orchestrator | 2025-04-13 00:40:42 | INFO  | It takes a moment until task 2056df8e-decc-43cc-a8cb-c1159ae971c0 (wipe-partitions) has been started and output is visible here. 
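Earlier in the trace, `semver 8.1.0 7.0.0` prints `1` and the script then checks `[[ 1 -ge 0 ]]`, i.e. the `pull-images` play is gated on the manager version being at least 7.0.0. The `semver` helper itself is not shown in the log; a hypothetical stand-in built on GNU `sort -V` (no pre-release handling) could look like:

```shell
# Hypothetical stand-in for the semver helper seen in the trace:
# prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2.
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" == "$1" ]]; then
        echo 1
    else
        echo -1
    fi
}
```

Used as in the trace: `[[ $(semver_cmp "$MANAGER_VERSION" 7.0.0) -ge 0 ]]` selects every version from 7.0.0 upward, which 8.1.0 satisfies.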
2025-04-13 00:40:45.609807 | orchestrator | 2025-04-13 00:40:45.610708 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-04-13 00:40:45.610850 | orchestrator | 2025-04-13 00:40:45.611152 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-04-13 00:40:45.611364 | orchestrator | Sunday 13 April 2025 00:40:45 +0000 (0:00:00.130) 0:00:00.130 ********** 2025-04-13 00:40:46.272674 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:40:46.273171 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:40:46.273231 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:40:46.273594 | orchestrator | 2025-04-13 00:40:46.274129 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-04-13 00:40:46.278382 | orchestrator | Sunday 13 April 2025 00:40:46 +0000 (0:00:00.670) 0:00:00.800 ********** 2025-04-13 00:40:46.427247 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:40:46.517285 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:40:46.517861 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:40:46.518272 | orchestrator | 2025-04-13 00:40:46.518832 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-04-13 00:40:46.519553 | orchestrator | Sunday 13 April 2025 00:40:46 +0000 (0:00:00.247) 0:00:01.048 ********** 2025-04-13 00:40:47.258663 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:40:47.262236 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:40:47.263351 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:40:47.264534 | orchestrator | 2025-04-13 00:40:47.265857 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-04-13 00:40:47.267824 | orchestrator | Sunday 13 April 2025 00:40:47 +0000 (0:00:00.739) 0:00:01.787 ********** 2025-04-13 00:40:47.418780 | orchestrator | skipping: 
[testbed-node-3] 2025-04-13 00:40:47.517056 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:40:47.518268 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:40:47.521724 | orchestrator | 2025-04-13 00:40:48.651799 | orchestrator | TASK [Check device availability] *********************************************** 2025-04-13 00:40:48.651982 | orchestrator | Sunday 13 April 2025 00:40:47 +0000 (0:00:00.260) 0:00:02.047 ********** 2025-04-13 00:40:48.652043 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-13 00:40:48.652912 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-13 00:40:48.652950 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-13 00:40:48.653673 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-13 00:40:48.656575 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-13 00:40:48.657390 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-13 00:40:48.658638 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-13 00:40:48.658714 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-13 00:40:48.660467 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-13 00:40:48.660804 | orchestrator | 2025-04-13 00:40:48.663610 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-04-13 00:40:49.917182 | orchestrator | Sunday 13 April 2025 00:40:48 +0000 (0:00:01.131) 0:00:03.179 ********** 2025-04-13 00:40:49.917326 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-04-13 00:40:49.917800 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-04-13 00:40:49.919510 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-04-13 00:40:49.920594 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-04-13 00:40:49.924134 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-04-13 00:40:49.927374 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-04-13 00:40:49.927435 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-04-13 00:40:49.927449 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-04-13 00:40:49.927472 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-04-13 00:40:49.928141 | orchestrator | 2025-04-13 00:40:49.928791 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-04-13 00:40:49.929530 | orchestrator | Sunday 13 April 2025 00:40:49 +0000 (0:00:01.267) 0:00:04.446 ********** 2025-04-13 00:40:52.156996 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-13 00:40:52.157610 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-13 00:40:52.161591 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-13 00:40:52.161637 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-13 00:40:52.161674 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-13 00:40:52.161849 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-13 00:40:52.161887 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-13 00:40:52.161907 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-13 00:40:52.161917 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-13 00:40:52.161931 | orchestrator | 2025-04-13 00:40:52.162535 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-04-13 00:40:52.162927 | orchestrator | Sunday 13 April 2025 00:40:52 +0000 (0:00:02.237) 0:00:06.683 ********** 2025-04-13 00:40:52.781921 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:40:52.782820 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:40:52.784721 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:40:52.786434 | orchestrator | 2025-04-13 00:40:52.786517 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-04-13 00:40:53.396442 | orchestrator | Sunday 13 April 2025 00:40:52 +0000 (0:00:00.628) 0:00:07.312 ********** 2025-04-13 00:40:53.396585 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:40:53.399380 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:40:53.399455 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:40:53.402604 | orchestrator | 2025-04-13 00:40:53.406438 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:40:53.406857 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:40:53.406958 | orchestrator | 2025-04-13 00:40:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-13 00:40:53.406975 | orchestrator | 2025-04-13 00:40:53 | INFO  | Please wait and do not abort execution. 2025-04-13 00:40:53.406995 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:40:53.407248 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:40:53.407462 | orchestrator | 2025-04-13 00:40:53.407752 | orchestrator | Sunday 13 April 2025 00:40:53 +0000 (0:00:00.613) 0:00:07.926 ********** 2025-04-13 00:40:53.408010 | orchestrator | =============================================================================== 2025-04-13 00:40:53.408368 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.24s 2025-04-13 00:40:53.408648 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.27s 2025-04-13 00:40:53.408957 | orchestrator | Check device availability ----------------------------------------------- 1.13s 2025-04-13 00:40:53.409269 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.74s 2025-04-13 
00:40:53.409634 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.67s 2025-04-13 00:40:53.409928 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s 2025-04-13 00:40:53.410253 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s 2025-04-13 00:40:53.413638 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2025-04-13 00:40:53.413847 | orchestrator | Remove all rook related logical devices --------------------------------- 0.25s 2025-04-13 00:40:55.536776 | orchestrator | 2025-04-13 00:40:55 | INFO  | Task 6e21c842-12fc-4bcd-9612-2786b3c279f2 (facts) was prepared for execution. 2025-04-13 00:40:58.717458 | orchestrator | 2025-04-13 00:40:55 | INFO  | It takes a moment until task 6e21c842-12fc-4bcd-9612-2786b3c279f2 (facts) has been started and output is visible here. 2025-04-13 00:40:58.717629 | orchestrator | 2025-04-13 00:40:58.720652 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-13 00:40:58.720773 | orchestrator | 2025-04-13 00:40:58.721508 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-13 00:40:58.722226 | orchestrator | Sunday 13 April 2025 00:40:58 +0000 (0:00:00.211) 0:00:00.211 ********** 2025-04-13 00:40:59.231428 | orchestrator | ok: [testbed-manager] 2025-04-13 00:40:59.750427 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:40:59.752387 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:40:59.752889 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:40:59.752912 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:40:59.752925 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:40:59.753495 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:40:59.753736 | orchestrator | 2025-04-13 00:40:59.754829 | orchestrator | TASK [osism.commons.facts : Copy fact files] 
*********************************** 2025-04-13 00:40:59.755044 | orchestrator | Sunday 13 April 2025 00:40:59 +0000 (0:00:01.033) 0:00:01.244 ********** 2025-04-13 00:40:59.885361 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:40:59.956631 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:41:00.024270 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:41:00.091994 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:41:00.160179 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:00.757341 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:41:00.759372 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:41:00.759506 | orchestrator | 2025-04-13 00:41:00.759576 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-13 00:41:00.760717 | orchestrator | 2025-04-13 00:41:00.761299 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-13 00:41:00.761339 | orchestrator | Sunday 13 April 2025 00:41:00 +0000 (0:00:01.006) 0:00:02.250 ********** 2025-04-13 00:41:05.404707 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:41:05.406793 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:41:05.406863 | orchestrator | ok: [testbed-manager] 2025-04-13 00:41:05.412410 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:41:05.413944 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:41:05.415178 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:41:05.416387 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:41:05.417257 | orchestrator | 2025-04-13 00:41:05.419050 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-13 00:41:05.419182 | orchestrator | 2025-04-13 00:41:05.420614 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-13 00:41:05.421390 | orchestrator | Sunday 13 April 2025 00:41:05 +0000 (0:00:04.649) 
0:00:06.899 ********** 2025-04-13 00:41:05.785657 | orchestrator | skipping: [testbed-manager] 2025-04-13 00:41:05.874082 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:41:05.962199 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:41:06.038353 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:41:06.122517 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:06.164529 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:41:06.165496 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:41:06.165550 | orchestrator | 2025-04-13 00:41:06.167210 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:41:06.167301 | orchestrator | 2025-04-13 00:41:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-13 00:41:06.167995 | orchestrator | 2025-04-13 00:41:06 | INFO  | Please wait and do not abort execution. 2025-04-13 00:41:06.168105 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:41:06.170315 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:41:06.172597 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:41:06.173369 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:41:06.174418 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:41:06.174458 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:41:06.174667 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:41:06.175317 | orchestrator | 2025-04-13 00:41:06.176067 | orchestrator | Sunday 13 April 2025 00:41:06 
+0000 (0:00:00.761) 0:00:07.661 ********** 2025-04-13 00:41:06.176595 | orchestrator | =============================================================================== 2025-04-13 00:41:06.176950 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.65s 2025-04-13 00:41:06.178558 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.03s 2025-04-13 00:41:08.457004 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.01s 2025-04-13 00:41:08.457122 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.76s 2025-04-13 00:41:08.457159 | orchestrator | 2025-04-13 00:41:08 | INFO  | Task 364ae3d0-13e9-4698-b06d-e062760e363b (ceph-configure-lvm-volumes) was prepared for execution. 2025-04-13 00:41:11.802738 | orchestrator | 2025-04-13 00:41:08 | INFO  | It takes a moment until task 364ae3d0-13e9-4698-b06d-e062760e363b (ceph-configure-lvm-volumes) has been started and output is visible here. 
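For reference, the `wipe-partitions` play earlier in the log runs a destructive sequence per data disk (`/dev/sdb`..`/dev/sdd` on each storage node): signature removal, a 32M zero overwrite, then a udev reload and trigger. (The "UID 167" in its discovery task is conventionally the `ceph` user inside Ceph containers.) A shell equivalent might look like the sketch below; `wipe_device` and `refresh_udev` are illustrative names, and the functions are only defined here, not run:

```shell
# Sketch of the per-device wipe sequence from the play above.
# Destructive: only ever point this at disks you intend to erase.
wipe_device() {
    local dev=$1
    wipefs --all "$dev"                       # remove filesystem/LVM/RAID signatures
    dd if=/dev/zero of="$dev" bs=1M count=32  # zero the first 32M (partition tables, labels)
}

refresh_udev() {
    udevadm control --reload-rules            # "Reload udev rules"
    udevadm trigger                           # "Request device events from the kernel"
}
```

Zeroing only the first 32M is enough to make the disks look blank to Ceph's LVM provisioning without spending hours overwriting whole devices.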
2025-04-13 00:41:11.802923 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-13 00:41:12.373173 | orchestrator | 2025-04-13 00:41:12.373928 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-13 00:41:12.376996 | orchestrator | 2025-04-13 00:41:12.377279 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-13 00:41:12.378515 | orchestrator | Sunday 13 April 2025 00:41:12 +0000 (0:00:00.494) 0:00:00.494 ********** 2025-04-13 00:41:12.641794 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-13 00:41:12.642181 | orchestrator | 2025-04-13 00:41:12.642272 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-13 00:41:12.647174 | orchestrator | Sunday 13 April 2025 00:41:12 +0000 (0:00:00.270) 0:00:00.764 ********** 2025-04-13 00:41:12.885546 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:41:12.885766 | orchestrator | 2025-04-13 00:41:12.887350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:12.891032 | orchestrator | Sunday 13 April 2025 00:41:12 +0000 (0:00:00.243) 0:00:01.007 ********** 2025-04-13 00:41:13.427467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-13 00:41:13.428752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-13 00:41:13.430124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-13 00:41:13.430156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-04-13 00:41:13.430176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-13 00:41:13.431129 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-13 00:41:13.432595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-13 00:41:13.433752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-13 00:41:13.435080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-13 00:41:13.436130 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-13 00:41:13.437067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-13 00:41:13.437870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-13 00:41:13.438600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-13 00:41:13.441544 | orchestrator | 2025-04-13 00:41:13.641053 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:13.641177 | orchestrator | Sunday 13 April 2025 00:41:13 +0000 (0:00:00.538) 0:00:01.545 ********** 2025-04-13 00:41:13.641241 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:13.643192 | orchestrator | 2025-04-13 00:41:13.643962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:13.645730 | orchestrator | Sunday 13 April 2025 00:41:13 +0000 (0:00:00.216) 0:00:01.762 ********** 2025-04-13 00:41:13.848437 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:13.855770 | orchestrator | 2025-04-13 00:41:14.057196 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:14.057335 | orchestrator | Sunday 13 April 2025 00:41:13 +0000 (0:00:00.208) 0:00:01.970 ********** 2025-04-13 00:41:14.057383 | orchestrator | skipping: 
[testbed-node-3] 2025-04-13 00:41:14.059080 | orchestrator | 2025-04-13 00:41:14.059122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:14.059960 | orchestrator | Sunday 13 April 2025 00:41:14 +0000 (0:00:00.206) 0:00:02.176 ********** 2025-04-13 00:41:14.257228 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:14.258229 | orchestrator | 2025-04-13 00:41:14.260706 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:14.261183 | orchestrator | Sunday 13 April 2025 00:41:14 +0000 (0:00:00.203) 0:00:02.380 ********** 2025-04-13 00:41:14.454552 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:14.454987 | orchestrator | 2025-04-13 00:41:14.455190 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:14.455221 | orchestrator | Sunday 13 April 2025 00:41:14 +0000 (0:00:00.196) 0:00:02.576 ********** 2025-04-13 00:41:14.656007 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:14.836249 | orchestrator | 2025-04-13 00:41:14.836368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:14.836387 | orchestrator | Sunday 13 April 2025 00:41:14 +0000 (0:00:00.198) 0:00:02.775 ********** 2025-04-13 00:41:14.836419 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:14.837135 | orchestrator | 2025-04-13 00:41:14.839338 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:14.841854 | orchestrator | Sunday 13 April 2025 00:41:14 +0000 (0:00:00.183) 0:00:02.959 ********** 2025-04-13 00:41:15.021027 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:15.021431 | orchestrator | 2025-04-13 00:41:15.022529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:15.023263 | 
orchestrator | Sunday 13 April 2025 00:41:15 +0000 (0:00:00.185) 0:00:03.144 ********** 2025-04-13 00:41:15.655219 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099) 2025-04-13 00:41:15.655376 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099) 2025-04-13 00:41:15.655419 | orchestrator | 2025-04-13 00:41:15.655593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:15.656193 | orchestrator | Sunday 13 April 2025 00:41:15 +0000 (0:00:00.633) 0:00:03.777 ********** 2025-04-13 00:41:16.537440 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d62d4166-25a1-4741-94fc-59c78379b097) 2025-04-13 00:41:16.537614 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d62d4166-25a1-4741-94fc-59c78379b097) 2025-04-13 00:41:16.537637 | orchestrator | 2025-04-13 00:41:16.537938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:16.538067 | orchestrator | Sunday 13 April 2025 00:41:16 +0000 (0:00:00.883) 0:00:04.661 ********** 2025-04-13 00:41:16.990862 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_24d70fc8-7961-4caf-9f39-267d5072f1bc) 2025-04-13 00:41:16.991085 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_24d70fc8-7961-4caf-9f39-267d5072f1bc) 2025-04-13 00:41:16.991146 | orchestrator | 2025-04-13 00:41:16.991174 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:16.991237 | orchestrator | Sunday 13 April 2025 00:41:16 +0000 (0:00:00.451) 0:00:05.113 ********** 2025-04-13 00:41:17.424558 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bd3f4097-e1b2-4e0f-b572-2003c7cd8b15) 2025-04-13 00:41:17.425843 | orchestrator | ok: [testbed-node-3] => 
(item=scsi-SQEMU_QEMU_HARDDISK_bd3f4097-e1b2-4e0f-b572-2003c7cd8b15) 2025-04-13 00:41:17.425981 | orchestrator | 2025-04-13 00:41:17.426011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:17.794823 | orchestrator | Sunday 13 April 2025 00:41:17 +0000 (0:00:00.433) 0:00:05.546 ********** 2025-04-13 00:41:17.794947 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-13 00:41:17.796480 | orchestrator | 2025-04-13 00:41:18.212872 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:18.213022 | orchestrator | Sunday 13 April 2025 00:41:17 +0000 (0:00:00.370) 0:00:05.917 ********** 2025-04-13 00:41:18.213057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-13 00:41:18.214309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-13 00:41:18.214345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-13 00:41:18.214553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-13 00:41:18.214582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-13 00:41:18.214818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-13 00:41:18.215007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-13 00:41:18.215240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-13 00:41:18.215908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-13 00:41:18.216293 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-13 00:41:18.216408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-13 00:41:18.218481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-13 00:41:18.221126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-13 00:41:18.221219 | orchestrator | 2025-04-13 00:41:18.221321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:18.222688 | orchestrator | Sunday 13 April 2025 00:41:18 +0000 (0:00:00.418) 0:00:06.335 ********** 2025-04-13 00:41:18.416189 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:18.416372 | orchestrator | 2025-04-13 00:41:18.416409 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:18.416444 | orchestrator | Sunday 13 April 2025 00:41:18 +0000 (0:00:00.204) 0:00:06.540 ********** 2025-04-13 00:41:18.622800 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:18.624144 | orchestrator | 2025-04-13 00:41:18.624191 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:18.625030 | orchestrator | Sunday 13 April 2025 00:41:18 +0000 (0:00:00.206) 0:00:06.746 ********** 2025-04-13 00:41:18.824421 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:18.826819 | orchestrator | 2025-04-13 00:41:18.827107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:19.030707 | orchestrator | Sunday 13 April 2025 00:41:18 +0000 (0:00:00.201) 0:00:06.948 ********** 2025-04-13 00:41:19.030960 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:19.031110 | orchestrator | 2025-04-13 00:41:19.456090 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-04-13 00:41:19.456216 | orchestrator | Sunday 13 April 2025 00:41:19 +0000 (0:00:00.204) 0:00:07.153 ********** 2025-04-13 00:41:19.456251 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:19.457615 | orchestrator | 2025-04-13 00:41:19.461078 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:19.708838 | orchestrator | Sunday 13 April 2025 00:41:19 +0000 (0:00:00.425) 0:00:07.578 ********** 2025-04-13 00:41:19.709015 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:19.709284 | orchestrator | 2025-04-13 00:41:19.709866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:19.709925 | orchestrator | Sunday 13 April 2025 00:41:19 +0000 (0:00:00.253) 0:00:07.832 ********** 2025-04-13 00:41:19.910315 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:19.910576 | orchestrator | 2025-04-13 00:41:19.910904 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:19.911233 | orchestrator | Sunday 13 April 2025 00:41:19 +0000 (0:00:00.201) 0:00:08.034 ********** 2025-04-13 00:41:20.162484 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:20.163090 | orchestrator | 2025-04-13 00:41:20.163136 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:20.164009 | orchestrator | Sunday 13 April 2025 00:41:20 +0000 (0:00:00.250) 0:00:08.284 ********** 2025-04-13 00:41:20.861550 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-13 00:41:20.861683 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-13 00:41:20.861694 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-13 00:41:20.861703 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-13 00:41:20.863861 | orchestrator | 2025-04-13 
00:41:20.863928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:20.863941 | orchestrator | Sunday 13 April 2025 00:41:20 +0000 (0:00:00.700) 0:00:08.985 ********** 2025-04-13 00:41:21.066246 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:21.070241 | orchestrator | 2025-04-13 00:41:21.070317 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:21.334772 | orchestrator | Sunday 13 April 2025 00:41:21 +0000 (0:00:00.204) 0:00:09.189 ********** 2025-04-13 00:41:21.335006 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:21.335293 | orchestrator | 2025-04-13 00:41:21.338475 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:21.563805 | orchestrator | Sunday 13 April 2025 00:41:21 +0000 (0:00:00.266) 0:00:09.456 ********** 2025-04-13 00:41:21.564028 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:21.565083 | orchestrator | 2025-04-13 00:41:21.565909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:21.566597 | orchestrator | Sunday 13 April 2025 00:41:21 +0000 (0:00:00.229) 0:00:09.686 ********** 2025-04-13 00:41:21.781102 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:21.784436 | orchestrator | 2025-04-13 00:41:21.785156 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-13 00:41:21.786188 | orchestrator | Sunday 13 April 2025 00:41:21 +0000 (0:00:00.216) 0:00:09.902 ********** 2025-04-13 00:41:21.977409 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-04-13 00:41:21.978189 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-04-13 00:41:21.980078 | orchestrator | 2025-04-13 00:41:22.128261 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2025-04-13 00:41:22.128406 | orchestrator | Sunday 13 April 2025 00:41:21 +0000 (0:00:00.197) 0:00:10.100 ********** 2025-04-13 00:41:22.128442 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:22.128929 | orchestrator | 2025-04-13 00:41:22.129728 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-13 00:41:22.130184 | orchestrator | Sunday 13 April 2025 00:41:22 +0000 (0:00:00.148) 0:00:10.249 ********** 2025-04-13 00:41:22.527229 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:22.528661 | orchestrator | 2025-04-13 00:41:22.530606 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-13 00:41:22.532160 | orchestrator | Sunday 13 April 2025 00:41:22 +0000 (0:00:00.397) 0:00:10.646 ********** 2025-04-13 00:41:22.689006 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:22.690094 | orchestrator | 2025-04-13 00:41:22.691542 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-13 00:41:22.691776 | orchestrator | Sunday 13 April 2025 00:41:22 +0000 (0:00:00.164) 0:00:10.811 ********** 2025-04-13 00:41:22.824621 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:41:22.825428 | orchestrator | 2025-04-13 00:41:22.826448 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-13 00:41:22.828236 | orchestrator | Sunday 13 April 2025 00:41:22 +0000 (0:00:00.134) 0:00:10.946 ********** 2025-04-13 00:41:23.047443 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2045bad1-ab77-5a33-981a-e42fb4136085'}}) 2025-04-13 00:41:23.049331 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '075038e7-2b9c-5de1-9fc0-4ab80f908b26'}}) 2025-04-13 00:41:23.052377 | orchestrator | 2025-04-13 00:41:23.055692 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2025-04-13 00:41:23.056913 | orchestrator | Sunday 13 April 2025 00:41:23 +0000 (0:00:00.221) 0:00:11.167 ********** 2025-04-13 00:41:23.250629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2045bad1-ab77-5a33-981a-e42fb4136085'}})  2025-04-13 00:41:23.251226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '075038e7-2b9c-5de1-9fc0-4ab80f908b26'}})  2025-04-13 00:41:23.251267 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:23.252467 | orchestrator | 2025-04-13 00:41:23.255068 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-13 00:41:23.255558 | orchestrator | Sunday 13 April 2025 00:41:23 +0000 (0:00:00.202) 0:00:11.370 ********** 2025-04-13 00:41:23.417999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2045bad1-ab77-5a33-981a-e42fb4136085'}})  2025-04-13 00:41:23.419416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '075038e7-2b9c-5de1-9fc0-4ab80f908b26'}})  2025-04-13 00:41:23.420749 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:23.421959 | orchestrator | 2025-04-13 00:41:23.422827 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-13 00:41:23.423558 | orchestrator | Sunday 13 April 2025 00:41:23 +0000 (0:00:00.171) 0:00:11.541 ********** 2025-04-13 00:41:23.585239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2045bad1-ab77-5a33-981a-e42fb4136085'}})  2025-04-13 00:41:23.585636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '075038e7-2b9c-5de1-9fc0-4ab80f908b26'}})  2025-04-13 00:41:23.589497 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:41:23.591095 | 
orchestrator |
2025-04-13 00:41:23.593727 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-04-13 00:41:23.744771 | orchestrator | Sunday 13 April 2025 00:41:23 +0000 (0:00:00.166) 0:00:11.708 **********
2025-04-13 00:41:23.744965 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:41:23.745402 | orchestrator |
2025-04-13 00:41:23.745439 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-04-13 00:41:23.745753 | orchestrator | Sunday 13 April 2025 00:41:23 +0000 (0:00:00.158) 0:00:11.867 **********
2025-04-13 00:41:23.909662 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:41:24.039618 | orchestrator |
2025-04-13 00:41:24.039738 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-04-13 00:41:24.039754 | orchestrator | Sunday 13 April 2025 00:41:23 +0000 (0:00:00.161) 0:00:12.029 **********
2025-04-13 00:41:24.039780 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:41:24.040013 | orchestrator |
2025-04-13 00:41:24.040341 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-04-13 00:41:24.040772 | orchestrator | Sunday 13 April 2025 00:41:24 +0000 (0:00:00.133) 0:00:12.162 **********
2025-04-13 00:41:24.202947 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:41:24.209700 | orchestrator |
2025-04-13 00:41:24.214269 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-04-13 00:41:24.215572 | orchestrator | Sunday 13 April 2025 00:41:24 +0000 (0:00:00.161) 0:00:12.324 **********
2025-04-13 00:41:24.351295 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:41:24.352671 | orchestrator |
2025-04-13 00:41:24.353912 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-04-13 00:41:24.355622 | orchestrator | Sunday 13 April 2025 00:41:24 +0000 (0:00:00.148) 0:00:12.472 **********
2025-04-13 00:41:24.791072 | orchestrator | ok: [testbed-node-3] => {
2025-04-13 00:41:24.792695 | orchestrator |     "ceph_osd_devices": {
2025-04-13 00:41:24.797141 | orchestrator |         "sdb": {
2025-04-13 00:41:24.797264 | orchestrator |             "osd_lvm_uuid": "2045bad1-ab77-5a33-981a-e42fb4136085"
2025-04-13 00:41:24.797911 | orchestrator |         },
2025-04-13 00:41:24.798688 | orchestrator |         "sdc": {
2025-04-13 00:41:24.798857 | orchestrator |             "osd_lvm_uuid": "075038e7-2b9c-5de1-9fc0-4ab80f908b26"
2025-04-13 00:41:24.800088 | orchestrator |         }
2025-04-13 00:41:24.800831 | orchestrator |     }
2025-04-13 00:41:24.802488 | orchestrator | }
2025-04-13 00:41:24.805007 | orchestrator |
2025-04-13 00:41:24.806619 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-04-13 00:41:24.811564 | orchestrator | Sunday 13 April 2025 00:41:24 +0000 (0:00:00.442) 0:00:12.914 **********
2025-04-13 00:41:24.941055 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:41:24.941604 | orchestrator |
2025-04-13 00:41:24.942484 | orchestrator | TASK [Print DB devices] ********************************************************
2025-04-13 00:41:24.944593 | orchestrator | Sunday 13 April 2025 00:41:24 +0000 (0:00:00.150) 0:00:13.065 **********
2025-04-13 00:41:25.092804 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:41:25.094307 | orchestrator |
2025-04-13 00:41:25.096904 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-04-13 00:41:25.097806 | orchestrator | Sunday 13 April 2025 00:41:25 +0000 (0:00:00.148) 0:00:13.213 **********
2025-04-13 00:41:25.322302 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:41:25.327227 | orchestrator |
2025-04-13 00:41:25.329274 | orchestrator | TASK [Print configuration data] ************************************************
2025-04-13 00:41:25.330057 | orchestrator | Sunday 13 April 2025 00:41:25 +0000 (0:00:00.231) 0:00:13.445 **********
2025-04-13 00:41:25.617583 | orchestrator | changed: [testbed-node-3] => {
2025-04-13 00:41:25.617940 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-04-13 00:41:25.619575 | orchestrator |         "ceph_osd_devices": {
2025-04-13 00:41:25.620154 | orchestrator |             "sdb": {
2025-04-13 00:41:25.621128 | orchestrator |                 "osd_lvm_uuid": "2045bad1-ab77-5a33-981a-e42fb4136085"
2025-04-13 00:41:25.622874 | orchestrator |             },
2025-04-13 00:41:25.622994 | orchestrator |             "sdc": {
2025-04-13 00:41:25.623304 | orchestrator |                 "osd_lvm_uuid": "075038e7-2b9c-5de1-9fc0-4ab80f908b26"
2025-04-13 00:41:25.623982 | orchestrator |             }
2025-04-13 00:41:25.624831 | orchestrator |         },
2025-04-13 00:41:25.625125 | orchestrator |         "lvm_volumes": [
2025-04-13 00:41:25.625492 | orchestrator |             {
2025-04-13 00:41:25.630726 | orchestrator |                 "data": "osd-block-2045bad1-ab77-5a33-981a-e42fb4136085",
2025-04-13 00:41:25.633033 | orchestrator |                 "data_vg": "ceph-2045bad1-ab77-5a33-981a-e42fb4136085"
2025-04-13 00:41:25.636952 | orchestrator |             },
2025-04-13 00:41:25.638277 | orchestrator |             {
2025-04-13 00:41:25.638321 | orchestrator |                 "data": "osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26",
2025-04-13 00:41:25.638346 | orchestrator |                 "data_vg": "ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26"
2025-04-13 00:41:25.640193 | orchestrator |             }
2025-04-13 00:41:25.640985 | orchestrator |         ]
2025-04-13 00:41:25.642264 | orchestrator |     }
2025-04-13 00:41:25.642923 | orchestrator | }
2025-04-13 00:41:25.643690 | orchestrator |
2025-04-13 00:41:25.644188 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-04-13 00:41:25.645071 | orchestrator | Sunday 13 April 2025 00:41:25 +0000 (0:00:00.292) 0:00:13.737 **********
2025-04-13 00:41:28.003164 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-04-13 00:41:28.003719 | orchestrator |
2025-04-13 00:41:28.003758 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2025-04-13 00:41:28.003774 | orchestrator | 2025-04-13 00:41:28.003795 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-13 00:41:28.004203 | orchestrator | Sunday 13 April 2025 00:41:27 +0000 (0:00:02.380) 0:00:16.118 ********** 2025-04-13 00:41:28.276141 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-13 00:41:28.277065 | orchestrator | 2025-04-13 00:41:28.277921 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-13 00:41:28.278859 | orchestrator | Sunday 13 April 2025 00:41:28 +0000 (0:00:00.278) 0:00:16.396 ********** 2025-04-13 00:41:28.531250 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:41:28.532557 | orchestrator | 2025-04-13 00:41:28.534236 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:28.535783 | orchestrator | Sunday 13 April 2025 00:41:28 +0000 (0:00:00.253) 0:00:16.650 ********** 2025-04-13 00:41:28.971290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-04-13 00:41:28.972030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-04-13 00:41:28.973077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-04-13 00:41:28.975767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-04-13 00:41:28.976766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-04-13 00:41:28.976862 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-04-13 00:41:28.977284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-04-13 00:41:28.977966 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-04-13 00:41:28.978081 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-04-13 00:41:28.979037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-04-13 00:41:28.979204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-04-13 00:41:28.979504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-04-13 00:41:28.979925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-04-13 00:41:28.980473 | orchestrator | 2025-04-13 00:41:28.981322 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:28.981872 | orchestrator | Sunday 13 April 2025 00:41:28 +0000 (0:00:00.444) 0:00:17.094 ********** 2025-04-13 00:41:29.180799 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:41:29.182382 | orchestrator | 2025-04-13 00:41:29.183103 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:29.390605 | orchestrator | Sunday 13 April 2025 00:41:29 +0000 (0:00:00.207) 0:00:17.302 ********** 2025-04-13 00:41:29.390750 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:41:29.392458 | orchestrator | 2025-04-13 00:41:29.615158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:29.615306 | orchestrator | Sunday 13 April 2025 00:41:29 +0000 (0:00:00.211) 0:00:17.513 ********** 2025-04-13 00:41:29.615359 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:41:29.905616 | orchestrator | 2025-04-13 00:41:29.905738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:29.905758 | 
orchestrator | Sunday 13 April 2025 00:41:29 +0000 (0:00:00.224) 0:00:17.737 ********** 2025-04-13 00:41:29.905791 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:41:29.906296 | orchestrator | 2025-04-13 00:41:29.906334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:29.907700 | orchestrator | Sunday 13 April 2025 00:41:29 +0000 (0:00:00.291) 0:00:18.029 ********** 2025-04-13 00:41:30.091764 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:41:30.276586 | orchestrator | 2025-04-13 00:41:30.276734 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:30.276767 | orchestrator | Sunday 13 April 2025 00:41:30 +0000 (0:00:00.179) 0:00:18.209 ********** 2025-04-13 00:41:30.276800 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:41:30.278787 | orchestrator | 2025-04-13 00:41:30.282432 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:30.282549 | orchestrator | Sunday 13 April 2025 00:41:30 +0000 (0:00:00.188) 0:00:18.398 ********** 2025-04-13 00:41:30.467937 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:41:30.472069 | orchestrator | 2025-04-13 00:41:30.473123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:30.474099 | orchestrator | Sunday 13 April 2025 00:41:30 +0000 (0:00:00.192) 0:00:18.590 ********** 2025-04-13 00:41:30.683133 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:41:30.683745 | orchestrator | 2025-04-13 00:41:30.686767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:41:30.687751 | orchestrator | Sunday 13 April 2025 00:41:30 +0000 (0:00:00.214) 0:00:18.805 ********** 2025-04-13 00:41:31.061868 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7)
2025-04-13 00:41:31.062331 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7)
2025-04-13 00:41:31.063095 | orchestrator |
2025-04-13 00:41:31.064928 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:31.495253 | orchestrator | Sunday 13 April 2025 00:41:31 +0000 (0:00:00.381) 0:00:19.186 **********
2025-04-13 00:41:31.495390 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a0e179ac-f513-4bce-8698-5c5d77bb97a6)
2025-04-13 00:41:31.495643 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a0e179ac-f513-4bce-8698-5c5d77bb97a6)
2025-04-13 00:41:31.497212 | orchestrator |
2025-04-13 00:41:31.942340 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:31.942461 | orchestrator | Sunday 13 April 2025 00:41:31 +0000 (0:00:00.432) 0:00:19.618 **********
2025-04-13 00:41:31.942500 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aad8aa45-f541-429b-bfb0-28cd3fbd229c)
2025-04-13 00:41:31.944126 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aad8aa45-f541-429b-bfb0-28cd3fbd229c)
2025-04-13 00:41:31.944161 | orchestrator |
2025-04-13 00:41:31.944378 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:31.944847 | orchestrator | Sunday 13 April 2025 00:41:31 +0000 (0:00:00.443) 0:00:20.062 **********
2025-04-13 00:41:32.379583 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ea334510-65a0-4c82-ab7f-212ffba0ceeb)
2025-04-13 00:41:32.380838 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ea334510-65a0-4c82-ab7f-212ffba0ceeb)
2025-04-13 00:41:32.380934 | orchestrator |
2025-04-13 00:41:32.778559 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:32.778658 | orchestrator | Sunday 13 April 2025 00:41:32 +0000 (0:00:00.438) 0:00:20.501 **********
2025-04-13 00:41:32.778680 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-13 00:41:32.780594 | orchestrator |
2025-04-13 00:41:32.780630 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:32.780645 | orchestrator | Sunday 13 April 2025 00:41:32 +0000 (0:00:00.396) 0:00:20.897 **********
2025-04-13 00:41:33.391993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-04-13 00:41:33.395385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-04-13 00:41:33.395437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-04-13 00:41:33.395669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-04-13 00:41:33.395699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-04-13 00:41:33.396307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-04-13 00:41:33.398279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-04-13 00:41:33.398926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-04-13 00:41:33.399760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-04-13 00:41:33.400524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-04-13 00:41:33.401263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-04-13 00:41:33.402133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-04-13 00:41:33.402986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-04-13 00:41:33.403776 | orchestrator |
2025-04-13 00:41:33.404318 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:33.405079 | orchestrator | Sunday 13 April 2025 00:41:33 +0000 (0:00:00.615) 0:00:21.513 **********
2025-04-13 00:41:33.635868 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:33.636155 | orchestrator |
2025-04-13 00:41:33.636651 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:33.637129 | orchestrator | Sunday 13 April 2025 00:41:33 +0000 (0:00:00.245) 0:00:21.759 **********
2025-04-13 00:41:33.860975 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:33.862067 | orchestrator |
2025-04-13 00:41:33.862115 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:33.862382 | orchestrator | Sunday 13 April 2025 00:41:33 +0000 (0:00:00.218) 0:00:21.978 **********
2025-04-13 00:41:34.052547 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:34.270717 | orchestrator |
2025-04-13 00:41:34.270838 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:34.270859 | orchestrator | Sunday 13 April 2025 00:41:34 +0000 (0:00:00.192) 0:00:22.171 **********
2025-04-13 00:41:34.270946 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:34.275617 | orchestrator |
2025-04-13 00:41:34.278418 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:34.278522 | orchestrator | Sunday 13 April 2025 00:41:34 +0000 (0:00:00.220) 0:00:22.392 **********
2025-04-13 00:41:34.495971 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:34.496540 | orchestrator |
2025-04-13 00:41:34.496587 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:34.498094 | orchestrator | Sunday 13 April 2025 00:41:34 +0000 (0:00:00.226) 0:00:22.618 **********
2025-04-13 00:41:34.710756 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:34.711165 | orchestrator |
2025-04-13 00:41:34.711480 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:34.712046 | orchestrator | Sunday 13 April 2025 00:41:34 +0000 (0:00:00.214) 0:00:22.833 **********
2025-04-13 00:41:34.938097 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:34.938494 | orchestrator |
2025-04-13 00:41:34.942659 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:34.942996 | orchestrator | Sunday 13 April 2025 00:41:34 +0000 (0:00:00.225) 0:00:23.059 **********
2025-04-13 00:41:35.162856 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:35.163925 | orchestrator |
2025-04-13 00:41:35.164249 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:35.165573 | orchestrator | Sunday 13 April 2025 00:41:35 +0000 (0:00:00.224) 0:00:23.283 **********
2025-04-13 00:41:36.040204 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-04-13 00:41:36.040987 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-04-13 00:41:36.041907 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-04-13 00:41:36.043011 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-04-13 00:41:36.043766 | orchestrator |
2025-04-13 00:41:36.044317 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:36.047914 | orchestrator | Sunday 13 April 2025 00:41:36 +0000 (0:00:00.880) 0:00:24.163
**********
2025-04-13 00:41:36.744971 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:36.745102 | orchestrator |
2025-04-13 00:41:36.748657 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:36.749719 | orchestrator | Sunday 13 April 2025 00:41:36 +0000 (0:00:00.701) 0:00:24.865 **********
2025-04-13 00:41:36.937411 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:36.939066 | orchestrator |
2025-04-13 00:41:36.940069 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:36.940464 | orchestrator | Sunday 13 April 2025 00:41:36 +0000 (0:00:00.195) 0:00:25.060 **********
2025-04-13 00:41:37.157039 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:37.159345 | orchestrator |
2025-04-13 00:41:37.159404 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:37.357293 | orchestrator | Sunday 13 April 2025 00:41:37 +0000 (0:00:00.218) 0:00:25.279 **********
2025-04-13 00:41:37.357419 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:37.357486 | orchestrator |
2025-04-13 00:41:37.358098 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-04-13 00:41:37.358811 | orchestrator | Sunday 13 April 2025 00:41:37 +0000 (0:00:00.201) 0:00:25.480 **********
2025-04-13 00:41:37.553697 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-04-13 00:41:37.556697 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-04-13 00:41:37.557148 | orchestrator |
2025-04-13 00:41:37.557178 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-04-13 00:41:37.558123 | orchestrator | Sunday 13 April 2025 00:41:37 +0000 (0:00:00.193) 0:00:25.674 **********
2025-04-13 00:41:37.685549 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:37.685820 | orchestrator |
2025-04-13 00:41:37.686829 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-04-13 00:41:37.687544 | orchestrator | Sunday 13 April 2025 00:41:37 +0000 (0:00:00.133) 0:00:25.808 **********
2025-04-13 00:41:37.829958 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:37.830239 | orchestrator |
2025-04-13 00:41:37.831231 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-04-13 00:41:37.832142 | orchestrator | Sunday 13 April 2025 00:41:37 +0000 (0:00:00.144) 0:00:25.952 **********
2025-04-13 00:41:37.985402 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:37.985590 | orchestrator |
2025-04-13 00:41:37.986513 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-04-13 00:41:37.989108 | orchestrator | Sunday 13 April 2025 00:41:37 +0000 (0:00:00.154) 0:00:26.107 **********
2025-04-13 00:41:38.126164 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:41:38.126375 | orchestrator |
2025-04-13 00:41:38.127542 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-04-13 00:41:38.128644 | orchestrator | Sunday 13 April 2025 00:41:38 +0000 (0:00:00.140) 0:00:26.248 **********
2025-04-13 00:41:38.305213 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a50ad019-9a42-5399-96dd-0ec75fe99929'}})
2025-04-13 00:41:38.306422 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'}})
2025-04-13 00:41:38.309551 | orchestrator |
2025-04-13 00:41:38.475928 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-04-13 00:41:38.476068 | orchestrator | Sunday 13 April 2025 00:41:38 +0000 (0:00:00.179) 0:00:26.427 **********
2025-04-13 00:41:38.476109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a50ad019-9a42-5399-96dd-0ec75fe99929'}})
2025-04-13 00:41:38.672662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'}})
2025-04-13 00:41:38.672780 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:38.672800 | orchestrator |
2025-04-13 00:41:38.672815 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-04-13 00:41:38.672830 | orchestrator | Sunday 13 April 2025 00:41:38 +0000 (0:00:00.168) 0:00:26.596 **********
2025-04-13 00:41:38.672859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a50ad019-9a42-5399-96dd-0ec75fe99929'}})
2025-04-13 00:41:38.673296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'}})
2025-04-13 00:41:38.673332 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:38.675154 | orchestrator |
2025-04-13 00:41:38.675290 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-04-13 00:41:38.676254 | orchestrator | Sunday 13 April 2025 00:41:38 +0000 (0:00:00.198) 0:00:26.795 **********
2025-04-13 00:41:39.063245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a50ad019-9a42-5399-96dd-0ec75fe99929'}})
2025-04-13 00:41:39.065271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'}})
2025-04-13 00:41:39.068193 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:39.069061 | orchestrator |
2025-04-13 00:41:39.069094 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-04-13 00:41:39.070142 | orchestrator | Sunday 13 April 2025 00:41:39 +0000 (0:00:00.390) 0:00:27.186 **********
2025-04-13 00:41:39.224430 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:41:39.225216 | orchestrator |
2025-04-13 00:41:39.225617 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-04-13 00:41:39.226627 | orchestrator | Sunday 13 April 2025 00:41:39 +0000 (0:00:00.161) 0:00:27.347 **********
2025-04-13 00:41:39.372844 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:41:39.373807 | orchestrator |
2025-04-13 00:41:39.376088 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-04-13 00:41:39.376980 | orchestrator | Sunday 13 April 2025 00:41:39 +0000 (0:00:00.146) 0:00:27.493 **********
2025-04-13 00:41:39.536406 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:39.536579 | orchestrator |
2025-04-13 00:41:39.538104 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-04-13 00:41:39.538906 | orchestrator | Sunday 13 April 2025 00:41:39 +0000 (0:00:00.163) 0:00:27.656 **********
2025-04-13 00:41:39.681406 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:39.682376 | orchestrator |
2025-04-13 00:41:39.682716 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-04-13 00:41:39.683937 | orchestrator | Sunday 13 April 2025 00:41:39 +0000 (0:00:00.147) 0:00:27.804 **********
2025-04-13 00:41:39.838209 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:39.839553 | orchestrator |
2025-04-13 00:41:39.840393 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-04-13 00:41:39.842534 | orchestrator | Sunday 13 April 2025 00:41:39 +0000 (0:00:00.156) 0:00:27.960 **********
2025-04-13 00:41:39.984603 | orchestrator | ok: [testbed-node-4] => {
2025-04-13 00:41:39.985406 | orchestrator |  "ceph_osd_devices": {
2025-04-13 00:41:39.986404 | orchestrator |  "sdb":
{
2025-04-13 00:41:39.987375 | orchestrator |  "osd_lvm_uuid": "a50ad019-9a42-5399-96dd-0ec75fe99929"
2025-04-13 00:41:39.989471 | orchestrator |  },
2025-04-13 00:41:39.989935 | orchestrator |  "sdc": {
2025-04-13 00:41:39.989975 | orchestrator |  "osd_lvm_uuid": "c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23"
2025-04-13 00:41:39.992357 | orchestrator |  }
2025-04-13 00:41:39.992550 | orchestrator |  }
2025-04-13 00:41:39.993516 | orchestrator | }
2025-04-13 00:41:39.994122 | orchestrator |
2025-04-13 00:41:39.994792 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-04-13 00:41:39.994973 | orchestrator | Sunday 13 April 2025 00:41:39 +0000 (0:00:00.147) 0:00:28.107 **********
2025-04-13 00:41:40.140476 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:40.140669 | orchestrator |
2025-04-13 00:41:40.141440 | orchestrator | TASK [Print DB devices] ********************************************************
2025-04-13 00:41:40.142607 | orchestrator | Sunday 13 April 2025 00:41:40 +0000 (0:00:00.155) 0:00:28.262 **********
2025-04-13 00:41:40.288274 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:40.289009 | orchestrator |
2025-04-13 00:41:40.289999 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-04-13 00:41:40.290815 | orchestrator | Sunday 13 April 2025 00:41:40 +0000 (0:00:00.148) 0:00:28.411 **********
2025-04-13 00:41:40.448534 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:41:40.449416 | orchestrator |
2025-04-13 00:41:40.450770 | orchestrator | TASK [Print configuration data] ************************************************
2025-04-13 00:41:40.451371 | orchestrator | Sunday 13 April 2025 00:41:40 +0000 (0:00:00.159) 0:00:28.570 **********
2025-04-13 00:41:40.915262 | orchestrator | changed: [testbed-node-4] => {
2025-04-13 00:41:40.917629 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-04-13 00:41:40.918851 | orchestrator |  "ceph_osd_devices": {
2025-04-13 00:41:40.920944 | orchestrator |  "sdb": {
2025-04-13 00:41:40.922991 | orchestrator |  "osd_lvm_uuid": "a50ad019-9a42-5399-96dd-0ec75fe99929"
2025-04-13 00:41:40.923572 | orchestrator |  },
2025-04-13 00:41:40.924877 | orchestrator |  "sdc": {
2025-04-13 00:41:40.925292 | orchestrator |  "osd_lvm_uuid": "c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23"
2025-04-13 00:41:40.926474 | orchestrator |  }
2025-04-13 00:41:40.927381 | orchestrator |  },
2025-04-13 00:41:40.927806 | orchestrator |  "lvm_volumes": [
2025-04-13 00:41:40.928618 | orchestrator |  {
2025-04-13 00:41:40.929393 | orchestrator |  "data": "osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929",
2025-04-13 00:41:40.930014 | orchestrator |  "data_vg": "ceph-a50ad019-9a42-5399-96dd-0ec75fe99929"
2025-04-13 00:41:40.930708 | orchestrator |  },
2025-04-13 00:41:40.931043 | orchestrator |  {
2025-04-13 00:41:40.931584 | orchestrator |  "data": "osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23",
2025-04-13 00:41:40.931961 | orchestrator |  "data_vg": "ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23"
2025-04-13 00:41:40.932818 | orchestrator |  }
2025-04-13 00:41:40.933328 | orchestrator |  ]
2025-04-13 00:41:40.934350 | orchestrator |  }
2025-04-13 00:41:40.934685 | orchestrator | }
2025-04-13 00:41:40.935619 | orchestrator |
2025-04-13 00:41:40.935996 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-04-13 00:41:40.936706 | orchestrator | Sunday 13 April 2025 00:41:40 +0000 (0:00:00.463) 0:00:29.034 **********
2025-04-13 00:41:42.309476 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-04-13 00:41:42.311534 | orchestrator |
2025-04-13 00:41:42.311592 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-04-13 00:41:42.313361 | orchestrator |
2025-04-13 00:41:42.313483 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-13 00:41:42.314395 | orchestrator | Sunday 13 April 2025 00:41:42 +0000 (0:00:01.395) 0:00:30.429 **********
2025-04-13 00:41:42.565455 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-04-13 00:41:42.566138 | orchestrator |
2025-04-13 00:41:42.569500 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-13 00:41:42.570992 | orchestrator | Sunday 13 April 2025 00:41:42 +0000 (0:00:00.256) 0:00:30.686 **********
2025-04-13 00:41:42.808836 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:41:42.809847 | orchestrator |
2025-04-13 00:41:42.810771 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:42.813040 | orchestrator | Sunday 13 April 2025 00:41:42 +0000 (0:00:00.244) 0:00:30.931 **********
2025-04-13 00:41:43.578346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-04-13 00:41:43.578850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-04-13 00:41:43.578945 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-04-13 00:41:43.581538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-04-13 00:41:43.582574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-04-13 00:41:43.582615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-04-13 00:41:43.582636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-04-13 00:41:43.583402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-04-13 00:41:43.583841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-04-13 00:41:43.584920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-04-13 00:41:43.585307 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-04-13 00:41:43.586274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-04-13 00:41:43.586675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-04-13 00:41:43.587719 | orchestrator |
2025-04-13 00:41:43.589297 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:43.784554 | orchestrator | Sunday 13 April 2025 00:41:43 +0000 (0:00:00.768) 0:00:31.699 **********
2025-04-13 00:41:43.784758 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:43.785231 | orchestrator |
2025-04-13 00:41:43.786157 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:43.787129 | orchestrator | Sunday 13 April 2025 00:41:43 +0000 (0:00:00.208) 0:00:31.907 **********
2025-04-13 00:41:44.001955 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:44.003496 | orchestrator |
2025-04-13 00:41:44.004685 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:44.007465 | orchestrator | Sunday 13 April 2025 00:41:43 +0000 (0:00:00.217) 0:00:32.125 **********
2025-04-13 00:41:44.222833 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:44.223393 | orchestrator |
2025-04-13 00:41:44.224256 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:44.225035 | orchestrator | Sunday 13 April 2025 00:41:44 +0000 (0:00:00.220) 0:00:32.345 **********
2025-04-13 00:41:44.430086 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:44.430271 | orchestrator |
2025-04-13 00:41:44.431501 | orchestrator
| TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:44.433058 | orchestrator | Sunday 13 April 2025 00:41:44 +0000 (0:00:00.205) 0:00:32.551 **********
2025-04-13 00:41:44.654773 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:44.657447 | orchestrator |
2025-04-13 00:41:44.657512 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:44.863182 | orchestrator | Sunday 13 April 2025 00:41:44 +0000 (0:00:00.224) 0:00:32.775 **********
2025-04-13 00:41:44.863316 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:44.863397 | orchestrator |
2025-04-13 00:41:44.864236 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:44.865035 | orchestrator | Sunday 13 April 2025 00:41:44 +0000 (0:00:00.209) 0:00:32.985 **********
2025-04-13 00:41:45.050972 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:45.051875 | orchestrator |
2025-04-13 00:41:45.051945 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:45.052168 | orchestrator | Sunday 13 April 2025 00:41:45 +0000 (0:00:00.187) 0:00:33.172 **********
2025-04-13 00:41:45.257270 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:45.258277 | orchestrator |
2025-04-13 00:41:45.259325 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:45.260672 | orchestrator | Sunday 13 April 2025 00:41:45 +0000 (0:00:00.206) 0:00:33.379 **********
2025-04-13 00:41:45.897868 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8)
2025-04-13 00:41:45.898190 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8)
2025-04-13 00:41:45.898225 | orchestrator |
2025-04-13 00:41:45.898844 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:45.899001 | orchestrator | Sunday 13 April 2025 00:41:45 +0000 (0:00:00.639) 0:00:34.018 **********
2025-04-13 00:41:46.550682 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_15f38305-5d3a-4a2a-94a9-ec4f360f12f0)
2025-04-13 00:41:46.551367 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_15f38305-5d3a-4a2a-94a9-ec4f360f12f0)
2025-04-13 00:41:46.552079 | orchestrator |
2025-04-13 00:41:46.553141 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:46.555678 | orchestrator | Sunday 13 April 2025 00:41:46 +0000 (0:00:00.655) 0:00:34.673 **********
2025-04-13 00:41:47.002138 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_466f66ff-268f-471d-abe8-9f0f353ab0cc)
2025-04-13 00:41:47.002309 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_466f66ff-268f-471d-abe8-9f0f353ab0cc)
2025-04-13 00:41:47.002957 | orchestrator |
2025-04-13 00:41:47.003224 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:47.003958 | orchestrator | Sunday 13 April 2025 00:41:46 +0000 (0:00:00.450) 0:00:35.123 **********
2025-04-13 00:41:47.424080 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d771f52a-9ada-4427-8de2-0003eafe1256)
2025-04-13 00:41:47.425140 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d771f52a-9ada-4427-8de2-0003eafe1256)
2025-04-13 00:41:47.426716 | orchestrator |
2025-04-13 00:41:47.428637 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:41:47.771202 | orchestrator | Sunday 13 April 2025 00:41:47 +0000 (0:00:00.422) 0:00:35.546 **********
2025-04-13 00:41:47.771406 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-13 00:41:47.771491 | orchestrator |
2025-04-13 00:41:47.772308 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:47.774136 | orchestrator | Sunday 13 April 2025 00:41:47 +0000 (0:00:00.344) 0:00:35.891 **********
2025-04-13 00:41:48.166760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-04-13 00:41:48.167477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-04-13 00:41:48.167772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-04-13 00:41:48.169013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-04-13 00:41:48.171421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-04-13 00:41:48.171530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-04-13 00:41:48.171553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-04-13 00:41:48.172538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-04-13 00:41:48.173299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-04-13 00:41:48.174186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-04-13 00:41:48.175012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-04-13 00:41:48.175436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-04-13 00:41:48.176155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-04-13 00:41:48.176622 | orchestrator |
2025-04-13 00:41:48.177253 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:48.177738 | orchestrator | Sunday 13 April 2025 00:41:48 +0000 (0:00:00.397) 0:00:36.288 **********
2025-04-13 00:41:48.376590 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:48.376830 | orchestrator |
2025-04-13 00:41:48.377746 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:48.378312 | orchestrator | Sunday 13 April 2025 00:41:48 +0000 (0:00:00.210) 0:00:36.499 **********
2025-04-13 00:41:48.595399 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:48.595619 | orchestrator |
2025-04-13 00:41:48.596779 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:48.599427 | orchestrator | Sunday 13 April 2025 00:41:48 +0000 (0:00:00.217) 0:00:36.717 **********
2025-04-13 00:41:48.796604 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:48.797235 | orchestrator |
2025-04-13 00:41:48.797626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:48.798097 | orchestrator | Sunday 13 April 2025 00:41:48 +0000 (0:00:00.200) 0:00:36.918 **********
2025-04-13 00:41:49.013260 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:49.013561 | orchestrator |
2025-04-13 00:41:49.014209 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:49.014713 | orchestrator | Sunday 13 April 2025 00:41:49 +0000 (0:00:00.217) 0:00:37.135 **********
2025-04-13 00:41:49.595745 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:49.596596 | orchestrator |
2025-04-13 00:41:49.597383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:41:49.598428 | orchestrator | Sunday 13 April 2025 00:41:49 +0000
(0:00:00.582) 0:00:37.718 ********** 2025-04-13 00:41:49.802845 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:41:49.803514 | orchestrator | 2025-04-13 00:41:49.805997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:50.013176 | orchestrator | Sunday 13 April 2025 00:41:49 +0000 (0:00:00.205) 0:00:37.923 ********** 2025-04-13 00:41:50.013361 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:41:50.013453 | orchestrator | 2025-04-13 00:41:50.013737 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:50.211035 | orchestrator | Sunday 13 April 2025 00:41:50 +0000 (0:00:00.211) 0:00:38.135 ********** 2025-04-13 00:41:50.211166 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:41:50.211582 | orchestrator | 2025-04-13 00:41:50.211982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:50.212017 | orchestrator | Sunday 13 April 2025 00:41:50 +0000 (0:00:00.197) 0:00:38.332 ********** 2025-04-13 00:41:50.843360 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-04-13 00:41:50.843526 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-04-13 00:41:50.843796 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-04-13 00:41:50.844310 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-04-13 00:41:50.844822 | orchestrator | 2025-04-13 00:41:50.845282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:50.846521 | orchestrator | Sunday 13 April 2025 00:41:50 +0000 (0:00:00.631) 0:00:38.964 ********** 2025-04-13 00:41:51.044353 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:41:51.044738 | orchestrator | 2025-04-13 00:41:51.045289 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:51.046056 | orchestrator 
| Sunday 13 April 2025 00:41:51 +0000 (0:00:00.202) 0:00:39.166 ********** 2025-04-13 00:41:51.244777 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:41:51.245101 | orchestrator | 2025-04-13 00:41:51.245735 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:51.246208 | orchestrator | Sunday 13 April 2025 00:41:51 +0000 (0:00:00.200) 0:00:39.367 ********** 2025-04-13 00:41:51.482173 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:41:51.482337 | orchestrator | 2025-04-13 00:41:51.482828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:41:51.483365 | orchestrator | Sunday 13 April 2025 00:41:51 +0000 (0:00:00.237) 0:00:39.604 ********** 2025-04-13 00:41:51.695389 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:41:51.695856 | orchestrator | 2025-04-13 00:41:51.695923 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-13 00:41:51.699849 | orchestrator | Sunday 13 April 2025 00:41:51 +0000 (0:00:00.211) 0:00:39.816 ********** 2025-04-13 00:41:51.884829 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-04-13 00:41:51.885020 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-04-13 00:41:51.885477 | orchestrator | 2025-04-13 00:41:51.886002 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-13 00:41:51.886477 | orchestrator | Sunday 13 April 2025 00:41:51 +0000 (0:00:00.189) 0:00:40.005 ********** 2025-04-13 00:41:52.040584 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:41:52.041373 | orchestrator | 2025-04-13 00:41:52.042332 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-13 00:41:52.043240 | orchestrator | Sunday 13 April 2025 00:41:52 +0000 (0:00:00.157) 0:00:40.163 ********** 
2025-04-13 00:41:52.406418 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:41:52.407824 | orchestrator | 2025-04-13 00:41:52.408665 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-13 00:41:52.409597 | orchestrator | Sunday 13 April 2025 00:41:52 +0000 (0:00:00.366) 0:00:40.529 ********** 2025-04-13 00:41:52.558538 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:41:52.559257 | orchestrator | 2025-04-13 00:41:52.560028 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-13 00:41:52.560658 | orchestrator | Sunday 13 April 2025 00:41:52 +0000 (0:00:00.151) 0:00:40.681 ********** 2025-04-13 00:41:52.700414 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:41:52.700777 | orchestrator | 2025-04-13 00:41:52.700819 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-13 00:41:52.701675 | orchestrator | Sunday 13 April 2025 00:41:52 +0000 (0:00:00.140) 0:00:40.821 ********** 2025-04-13 00:41:52.905732 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'}}) 2025-04-13 00:41:52.906118 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cc16a9be-1c89-5ed3-8c34-f79b9c168598'}}) 2025-04-13 00:41:52.907147 | orchestrator | 2025-04-13 00:41:52.907863 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-13 00:41:52.910011 | orchestrator | Sunday 13 April 2025 00:41:52 +0000 (0:00:00.205) 0:00:41.027 ********** 2025-04-13 00:41:53.086447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'}})  2025-04-13 00:41:53.086752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cc16a9be-1c89-5ed3-8c34-f79b9c168598'}})  
2025-04-13 00:41:53.087848 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:53.089973 | orchestrator |
2025-04-13 00:41:53.254013 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-04-13 00:41:53.254164 | orchestrator | Sunday 13 April 2025 00:41:53 +0000 (0:00:00.180) 0:00:41.208 **********
2025-04-13 00:41:53.254201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'}})
2025-04-13 00:41:53.256025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cc16a9be-1c89-5ed3-8c34-f79b9c168598'}})
2025-04-13 00:41:53.256738 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:53.258796 | orchestrator |
2025-04-13 00:41:53.259667 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-04-13 00:41:53.259782 | orchestrator | Sunday 13 April 2025 00:41:53 +0000 (0:00:00.167) 0:00:41.376 **********
2025-04-13 00:41:53.430570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'}})
2025-04-13 00:41:53.431144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cc16a9be-1c89-5ed3-8c34-f79b9c168598'}})
2025-04-13 00:41:53.432491 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:53.432915 | orchestrator |
2025-04-13 00:41:53.433239 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-04-13 00:41:53.434317 | orchestrator | Sunday 13 April 2025 00:41:53 +0000 (0:00:00.175) 0:00:41.551 **********
2025-04-13 00:41:53.574997 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:41:53.575394 | orchestrator |
2025-04-13 00:41:53.576081 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-04-13 00:41:53.576839 | orchestrator | Sunday 13 April 2025 00:41:53 +0000 (0:00:00.146) 0:00:41.697 **********
2025-04-13 00:41:53.716615 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:41:53.717520 | orchestrator |
2025-04-13 00:41:53.718500 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-04-13 00:41:53.721736 | orchestrator | Sunday 13 April 2025 00:41:53 +0000 (0:00:00.141) 0:00:41.839 **********
2025-04-13 00:41:53.872579 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:53.873206 | orchestrator |
2025-04-13 00:41:53.873255 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-04-13 00:41:54.004862 | orchestrator | Sunday 13 April 2025 00:41:53 +0000 (0:00:00.153) 0:00:41.992 **********
2025-04-13 00:41:54.005055 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:54.005320 | orchestrator |
2025-04-13 00:41:54.006529 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-04-13 00:41:54.008657 | orchestrator | Sunday 13 April 2025 00:41:53 +0000 (0:00:00.133) 0:00:42.126 **********
2025-04-13 00:41:54.367160 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:54.367532 | orchestrator |
2025-04-13 00:41:54.367588 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-04-13 00:41:54.368731 | orchestrator | Sunday 13 April 2025 00:41:54 +0000 (0:00:00.362) 0:00:42.488 **********
2025-04-13 00:41:54.535121 | orchestrator | ok: [testbed-node-5] => {
2025-04-13 00:41:54.536148 | orchestrator |     "ceph_osd_devices": {
2025-04-13 00:41:54.537716 | orchestrator |         "sdb": {
2025-04-13 00:41:54.538845 | orchestrator |             "osd_lvm_uuid": "c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a"
2025-04-13 00:41:54.539830 | orchestrator |         },
2025-04-13 00:41:54.540976 | orchestrator |         "sdc": {
2025-04-13 00:41:54.541971 | orchestrator |             "osd_lvm_uuid": "cc16a9be-1c89-5ed3-8c34-f79b9c168598"
2025-04-13 00:41:54.542555 | orchestrator |         }
2025-04-13 00:41:54.543492 | orchestrator |     }
2025-04-13 00:41:54.544285 | orchestrator | }
2025-04-13 00:41:54.545057 | orchestrator |
2025-04-13 00:41:54.546004 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-04-13 00:41:54.546682 | orchestrator | Sunday 13 April 2025 00:41:54 +0000 (0:00:00.168) 0:00:42.657 **********
2025-04-13 00:41:54.692678 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:54.695324 | orchestrator |
2025-04-13 00:41:54.695534 | orchestrator | TASK [Print DB devices] ********************************************************
2025-04-13 00:41:54.695655 | orchestrator | Sunday 13 April 2025 00:41:54 +0000 (0:00:00.154) 0:00:42.812 **********
2025-04-13 00:41:54.840375 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:54.840579 | orchestrator |
2025-04-13 00:41:54.841250 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-04-13 00:41:54.841383 | orchestrator | Sunday 13 April 2025 00:41:54 +0000 (0:00:00.149) 0:00:42.962 **********
2025-04-13 00:41:54.972197 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:41:54.972646 | orchestrator |
2025-04-13 00:41:54.973050 | orchestrator | TASK [Print configuration data] ************************************************
2025-04-13 00:41:54.974150 | orchestrator | Sunday 13 April 2025 00:41:54 +0000 (0:00:00.132) 0:00:43.094 **********
2025-04-13 00:41:55.240505 | orchestrator | changed: [testbed-node-5] => {
2025-04-13 00:41:55.241041 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-04-13 00:41:55.242105 | orchestrator |         "ceph_osd_devices": {
2025-04-13 00:41:55.242722 | orchestrator |             "sdb": {
2025-04-13 00:41:55.242961 | orchestrator |                 "osd_lvm_uuid": "c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a"
2025-04-13 00:41:55.245323 | orchestrator |             },
2025-04-13 00:41:55.245885 | orchestrator |             "sdc": {
2025-04-13 00:41:55.246176 | orchestrator |                 "osd_lvm_uuid": "cc16a9be-1c89-5ed3-8c34-f79b9c168598"
2025-04-13 00:41:55.246755 | orchestrator |             }
2025-04-13 00:41:55.246998 | orchestrator |         },
2025-04-13 00:41:55.247546 | orchestrator |         "lvm_volumes": [
2025-04-13 00:41:55.247883 | orchestrator |             {
2025-04-13 00:41:55.248521 | orchestrator |                 "data": "osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a",
2025-04-13 00:41:55.248761 | orchestrator |                 "data_vg": "ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a"
2025-04-13 00:41:55.249471 | orchestrator |             },
2025-04-13 00:41:55.249677 | orchestrator |             {
2025-04-13 00:41:55.250456 | orchestrator |                 "data": "osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598",
2025-04-13 00:41:55.253184 | orchestrator |                 "data_vg": "ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598"
2025-04-13 00:41:55.253926 | orchestrator |             }
2025-04-13 00:41:55.254667 | orchestrator |         ]
2025-04-13 00:41:55.255072 | orchestrator |     }
2025-04-13 00:41:55.255403 | orchestrator | }
2025-04-13 00:41:55.256158 | orchestrator |
2025-04-13 00:41:55.256416 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-04-13 00:41:55.256957 | orchestrator | Sunday 13 April 2025 00:41:55 +0000 (0:00:00.268) 0:00:43.363 **********
2025-04-13 00:41:56.344494 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-04-13 00:41:56.344785 | orchestrator |
2025-04-13 00:41:56.346191 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:41:56.346266 | orchestrator | 2025-04-13 00:41:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:41:56.347317 | orchestrator | 2025-04-13 00:41:56 | INFO  | Please wait and do not abort execution.
2025-04-13 00:41:56.347347 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-13 00:41:56.348824 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-13 00:41:56.349843 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-13 00:41:56.350867 | orchestrator |
2025-04-13 00:41:56.351723 | orchestrator |
2025-04-13 00:41:56.352693 | orchestrator |
2025-04-13 00:41:56.353991 | orchestrator | TASKS RECAP ********************************************************************
2025-04-13 00:41:56.355338 | orchestrator | Sunday 13 April 2025 00:41:56 +0000 (0:00:01.101) 0:00:44.464 **********
2025-04-13 00:41:56.355784 | orchestrator | ===============================================================================
2025-04-13 00:41:56.356544 | orchestrator | Write configuration file ------------------------------------------------ 4.88s
2025-04-13 00:41:56.357289 | orchestrator | Add known links to the list of available block devices ------------------ 1.75s
2025-04-13 00:41:56.358259 | orchestrator | Add known partitions to the list of available block devices ------------- 1.43s
2025-04-13 00:41:56.359507 | orchestrator | Print configuration data ------------------------------------------------ 1.02s
2025-04-13 00:41:56.360840 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.91s
2025-04-13 00:41:56.361660 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s
2025-04-13 00:41:56.363083 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s
2025-04-13 00:41:56.364233 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s
2025-04-13 00:41:56.364649 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.76s
2025-04-13 00:41:56.365409 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s
2025-04-13 00:41:56.366226 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.73s
2025-04-13 00:41:56.367323 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2025-04-13 00:41:56.368033 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2025-04-13 00:41:56.368635 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.67s
2025-04-13 00:41:56.369135 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-04-13 00:41:56.370142 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-04-13 00:41:56.370659 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-04-13 00:41:56.371360 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2025-04-13 00:41:56.372384 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.61s
2025-04-13 00:41:56.373798 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s
2025-04-13 00:42:08.567433 | orchestrator | 2025-04-13 00:42:08 | INFO  | Task 80cd4021-65c4-4706-b677-7d43b9413b75 is running in background. Output coming soon.
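The play above derives the `lvm_volumes` structure that ceph-ansible consumes from OSISM's `ceph_osd_devices` dict (tasks "Generate lvm_volumes structure (block only)" and "Compile lvm_volumes"). A minimal sketch of that transformation, using the naming shown in the "Print configuration data" output; the helper name is illustrative, not the playbook's actual code:

```python
# Sketch: build a ceph-ansible-style lvm_volumes list from ceph_osd_devices.
# VG/LV names follow the "ceph-<uuid>" / "osd-block-<uuid>" convention seen
# in the log; build_lvm_volumes is a hypothetical helper name.
def build_lvm_volumes(ceph_osd_devices):
    volumes = []
    for device, meta in ceph_osd_devices.items():
        uuid = meta["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # LV holding the OSD block data
            "data_vg": f"ceph-{uuid}",     # VG backing that LV
        })
    return volumes

# Values taken from the "Print ceph_osd_devices" output for testbed-node-5.
devices = {
    "sdb": {"osd_lvm_uuid": "c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a"},
    "sdc": {"osd_lvm_uuid": "cc16a9be-1c89-5ed3-8c34-f79b9c168598"},
}
lvm_volumes = build_lvm_volumes(devices)
print(lvm_volumes)
```

This is the block-only variant; the skipped "block + db", "block + wal", and "block + db + wal" tasks would add `db`/`wal` keys to each entry when dedicated DB or WAL devices are configured.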
2025-04-13 00:42:45.541050 | orchestrator | 2025-04-13 00:42:36 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-04-13 00:42:47.208231 | orchestrator | 2025-04-13 00:42:36 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-04-13 00:42:47.208395 | orchestrator | 2025-04-13 00:42:36 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-04-13 00:42:47.208415 | orchestrator | 2025-04-13 00:42:37 | INFO  | Handling group overwrites in 99-overwrite
2025-04-13 00:42:47.208461 | orchestrator | 2025-04-13 00:42:37 | INFO  | Removing group ceph-mds from 50-ceph
2025-04-13 00:42:47.208489 | orchestrator | 2025-04-13 00:42:37 | INFO  | Removing group ceph-rgw from 50-ceph
2025-04-13 00:42:47.208505 | orchestrator | 2025-04-13 00:42:37 | INFO  | Removing group netbird:children from 50-infrastruture
2025-04-13 00:42:47.208520 | orchestrator | 2025-04-13 00:42:37 | INFO  | Removing group storage:children from 50-kolla
2025-04-13 00:42:47.208534 | orchestrator | 2025-04-13 00:42:37 | INFO  | Removing group frr:children from 60-generic
2025-04-13 00:42:47.208548 | orchestrator | 2025-04-13 00:42:37 | INFO  | Handling group overwrites in 20-roles
2025-04-13 00:42:47.208563 | orchestrator | 2025-04-13 00:42:37 | INFO  | Removing group k3s_node from 50-infrastruture
2025-04-13 00:42:47.208577 | orchestrator | 2025-04-13 00:42:37 | INFO  | File 20-netbox not found in /inventory.pre/
2025-04-13 00:42:47.208591 | orchestrator | 2025-04-13 00:42:45 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-04-13 00:42:47.208623 | orchestrator | 2025-04-13 00:42:47 | INFO  | Task 2fb65ebd-0f29-4fc0-9036-87c38f4ad9b5 (ceph-create-lvm-devices) was prepared for execution.
2025-04-13 00:42:50.146240 | orchestrator | 2025-04-13 00:42:47 | INFO  | It takes a moment until task 2fb65ebd-0f29-4fc0-9036-87c38f4ad9b5 (ceph-create-lvm-devices) has been started and output is visible here.
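The inventory-reconciler messages above ("Handling group overwrites", "Removing group X from Y") describe dropping a group section from one inventory fragment because a later fragment (such as 99-overwrite) redefines it. A rough sketch of that merge rule, assuming INI-style inventory fragments; the function name and parsing here are illustrative, not the reconciler's actual implementation:

```python
import re

# Sketch: drop a [group] (or [group:children]) section from an INI-style
# inventory fragment, mirroring the "Removing group X from Y" log messages.
# remove_group is a hypothetical helper, not OSISM code.
def remove_group(text, group):
    out, skipping = [], False
    for line in text.splitlines():
        m = re.match(r"\[([^\]]+)\]\s*$", line)
        if m:
            # Entering a new section: skip it only if it is the target group.
            skipping = (m.group(1) == group)
        if not skipping:
            out.append(line)
    return "\n".join(out)

fragment = "[ceph-mds]\ntestbed-node-0\n\n[ceph-mon]\ntestbed-node-0\n"
print(remove_group(fragment, "ceph-mds"))
```

Note that groups like `netbird:children` in the log are matched including the `:children` suffix, since that is part of the section header.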
2025-04-13 00:42:50.146414 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-13 00:42:50.641076 | orchestrator | 2025-04-13 00:42:50.641249 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-13 00:42:50.641291 | orchestrator | 2025-04-13 00:42:50.642240 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-13 00:42:50.642382 | orchestrator | Sunday 13 April 2025 00:42:50 +0000 (0:00:00.433) 0:00:00.433 ********** 2025-04-13 00:42:50.875869 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-13 00:42:50.876349 | orchestrator | 2025-04-13 00:42:50.876878 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-13 00:42:50.877515 | orchestrator | Sunday 13 April 2025 00:42:50 +0000 (0:00:00.236) 0:00:00.669 ********** 2025-04-13 00:42:51.108791 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:42:51.109454 | orchestrator | 2025-04-13 00:42:51.109984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:51.110592 | orchestrator | Sunday 13 April 2025 00:42:51 +0000 (0:00:00.232) 0:00:00.902 ********** 2025-04-13 00:42:51.822863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-13 00:42:51.826693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-13 00:42:51.827712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-13 00:42:51.828532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-04-13 00:42:51.829977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-13 00:42:51.831025 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-13 00:42:51.831920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-13 00:42:51.832850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-13 00:42:51.833675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-13 00:42:51.834331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-13 00:42:51.834882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-13 00:42:51.835404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-13 00:42:51.835841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-13 00:42:51.836537 | orchestrator | 2025-04-13 00:42:51.836737 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:51.837079 | orchestrator | Sunday 13 April 2025 00:42:51 +0000 (0:00:00.711) 0:00:01.613 ********** 2025-04-13 00:42:52.013087 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:52.013287 | orchestrator | 2025-04-13 00:42:52.013315 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:52.013952 | orchestrator | Sunday 13 April 2025 00:42:52 +0000 (0:00:00.192) 0:00:01.805 ********** 2025-04-13 00:42:52.208295 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:52.208685 | orchestrator | 2025-04-13 00:42:52.209527 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:52.211083 | orchestrator | Sunday 13 April 2025 00:42:52 +0000 (0:00:00.193) 0:00:01.999 ********** 2025-04-13 00:42:52.414376 | orchestrator | skipping: 
[testbed-node-3] 2025-04-13 00:42:52.414592 | orchestrator | 2025-04-13 00:42:52.415312 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:52.415732 | orchestrator | Sunday 13 April 2025 00:42:52 +0000 (0:00:00.209) 0:00:02.208 ********** 2025-04-13 00:42:52.615063 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:52.617858 | orchestrator | 2025-04-13 00:42:52.617894 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:52.617917 | orchestrator | Sunday 13 April 2025 00:42:52 +0000 (0:00:00.197) 0:00:02.406 ********** 2025-04-13 00:42:52.811554 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:52.812050 | orchestrator | 2025-04-13 00:42:52.812322 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:52.812416 | orchestrator | Sunday 13 April 2025 00:42:52 +0000 (0:00:00.198) 0:00:02.604 ********** 2025-04-13 00:42:53.024504 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:53.024691 | orchestrator | 2025-04-13 00:42:53.024721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:53.025136 | orchestrator | Sunday 13 April 2025 00:42:53 +0000 (0:00:00.211) 0:00:02.816 ********** 2025-04-13 00:42:53.221164 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:53.221371 | orchestrator | 2025-04-13 00:42:53.222589 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:53.223403 | orchestrator | Sunday 13 April 2025 00:42:53 +0000 (0:00:00.195) 0:00:03.012 ********** 2025-04-13 00:42:53.421857 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:53.422458 | orchestrator | 2025-04-13 00:42:53.422995 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:53.423846 | 
orchestrator | Sunday 13 April 2025 00:42:53 +0000 (0:00:00.203) 0:00:03.215 ********** 2025-04-13 00:42:54.069205 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099) 2025-04-13 00:42:54.069581 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099) 2025-04-13 00:42:54.070826 | orchestrator | 2025-04-13 00:42:54.071576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:54.072975 | orchestrator | Sunday 13 April 2025 00:42:54 +0000 (0:00:00.646) 0:00:03.862 ********** 2025-04-13 00:42:54.876319 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d62d4166-25a1-4741-94fc-59c78379b097) 2025-04-13 00:42:54.876553 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d62d4166-25a1-4741-94fc-59c78379b097) 2025-04-13 00:42:54.877079 | orchestrator | 2025-04-13 00:42:54.877568 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:54.878081 | orchestrator | Sunday 13 April 2025 00:42:54 +0000 (0:00:00.806) 0:00:04.669 ********** 2025-04-13 00:42:55.311982 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_24d70fc8-7961-4caf-9f39-267d5072f1bc) 2025-04-13 00:42:55.312701 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_24d70fc8-7961-4caf-9f39-267d5072f1bc) 2025-04-13 00:42:55.312754 | orchestrator | 2025-04-13 00:42:55.313977 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:55.737409 | orchestrator | Sunday 13 April 2025 00:42:55 +0000 (0:00:00.433) 0:00:05.103 ********** 2025-04-13 00:42:55.737620 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bd3f4097-e1b2-4e0f-b572-2003c7cd8b15) 2025-04-13 00:42:55.737756 | orchestrator | ok: [testbed-node-3] => 
(item=scsi-SQEMU_QEMU_HARDDISK_bd3f4097-e1b2-4e0f-b572-2003c7cd8b15) 2025-04-13 00:42:55.737825 | orchestrator | 2025-04-13 00:42:55.739060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-13 00:42:55.739851 | orchestrator | Sunday 13 April 2025 00:42:55 +0000 (0:00:00.428) 0:00:05.531 ********** 2025-04-13 00:42:56.101405 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-13 00:42:56.102349 | orchestrator | 2025-04-13 00:42:56.103480 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:56.105924 | orchestrator | Sunday 13 April 2025 00:42:56 +0000 (0:00:00.363) 0:00:05.895 ********** 2025-04-13 00:42:56.576578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-13 00:42:56.579124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-13 00:42:56.579920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-13 00:42:56.580247 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-13 00:42:56.580758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-13 00:42:56.581646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-13 00:42:56.582693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-13 00:42:56.583085 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-13 00:42:56.583578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-13 00:42:56.583961 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-13 00:42:56.585108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-13 00:42:56.585520 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-13 00:42:56.585964 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-13 00:42:56.586419 | orchestrator | 2025-04-13 00:42:56.587018 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:56.587766 | orchestrator | Sunday 13 April 2025 00:42:56 +0000 (0:00:00.472) 0:00:06.367 ********** 2025-04-13 00:42:56.776033 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:56.776426 | orchestrator | 2025-04-13 00:42:56.776472 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:56.777414 | orchestrator | Sunday 13 April 2025 00:42:56 +0000 (0:00:00.201) 0:00:06.569 ********** 2025-04-13 00:42:56.977842 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:56.978137 | orchestrator | 2025-04-13 00:42:56.979007 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:56.979306 | orchestrator | Sunday 13 April 2025 00:42:56 +0000 (0:00:00.201) 0:00:06.771 ********** 2025-04-13 00:42:57.178165 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:57.178845 | orchestrator | 2025-04-13 00:42:57.178896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:57.179656 | orchestrator | Sunday 13 April 2025 00:42:57 +0000 (0:00:00.200) 0:00:06.971 ********** 2025-04-13 00:42:57.372881 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:57.373224 | orchestrator | 2025-04-13 00:42:57.375171 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-04-13 00:42:57.375305 | orchestrator | Sunday 13 April 2025 00:42:57 +0000 (0:00:00.192) 0:00:07.164 ********** 2025-04-13 00:42:57.949285 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:57.950355 | orchestrator | 2025-04-13 00:42:57.951514 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:57.953930 | orchestrator | Sunday 13 April 2025 00:42:57 +0000 (0:00:00.578) 0:00:07.743 ********** 2025-04-13 00:42:58.169024 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:58.170443 | orchestrator | 2025-04-13 00:42:58.170843 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:58.171590 | orchestrator | Sunday 13 April 2025 00:42:58 +0000 (0:00:00.219) 0:00:07.962 ********** 2025-04-13 00:42:58.373131 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:58.373861 | orchestrator | 2025-04-13 00:42:58.374628 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:58.375399 | orchestrator | Sunday 13 April 2025 00:42:58 +0000 (0:00:00.201) 0:00:08.164 ********** 2025-04-13 00:42:58.579097 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:58.582348 | orchestrator | 2025-04-13 00:42:58.582465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:59.237666 | orchestrator | Sunday 13 April 2025 00:42:58 +0000 (0:00:00.206) 0:00:08.370 ********** 2025-04-13 00:42:59.237806 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-13 00:42:59.238502 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-13 00:42:59.239366 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-13 00:42:59.240540 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-13 00:42:59.241237 | orchestrator | 2025-04-13 
00:42:59.242146 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:59.242971 | orchestrator | Sunday 13 April 2025 00:42:59 +0000 (0:00:00.660) 0:00:09.031 ********** 2025-04-13 00:42:59.446071 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:59.447193 | orchestrator | 2025-04-13 00:42:59.448117 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:59.449054 | orchestrator | Sunday 13 April 2025 00:42:59 +0000 (0:00:00.206) 0:00:09.238 ********** 2025-04-13 00:42:59.641070 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:59.641257 | orchestrator | 2025-04-13 00:42:59.643773 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:59.644976 | orchestrator | Sunday 13 April 2025 00:42:59 +0000 (0:00:00.195) 0:00:09.434 ********** 2025-04-13 00:42:59.859515 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:42:59.859694 | orchestrator | 2025-04-13 00:42:59.860974 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:42:59.861701 | orchestrator | Sunday 13 April 2025 00:42:59 +0000 (0:00:00.218) 0:00:09.652 ********** 2025-04-13 00:43:00.062366 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:43:00.062568 | orchestrator | 2025-04-13 00:43:00.062602 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-13 00:43:00.063705 | orchestrator | Sunday 13 April 2025 00:43:00 +0000 (0:00:00.202) 0:00:09.854 ********** 2025-04-13 00:43:00.195806 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:43:00.197003 | orchestrator | 2025-04-13 00:43:00.198488 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-13 00:43:00.199139 | orchestrator | Sunday 13 April 2025 00:43:00 +0000 (0:00:00.132) 
0:00:09.987 **********
2025-04-13 00:43:00.401834 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2045bad1-ab77-5a33-981a-e42fb4136085'}})
2025-04-13 00:43:00.403661 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '075038e7-2b9c-5de1-9fc0-4ab80f908b26'}})
2025-04-13 00:43:00.404380 | orchestrator |
2025-04-13 00:43:00.404620 | orchestrator | TASK [Create block VGs] ********************************************************
2025-04-13 00:43:00.405395 | orchestrator | Sunday 13 April 2025 00:43:00 +0000 (0:00:00.204) 0:00:10.191 **********
2025-04-13 00:43:02.538868 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:02.542620 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:02.717050 | orchestrator |
2025-04-13 00:43:02.717162 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-04-13 00:43:02.717182 | orchestrator | Sunday 13 April 2025 00:43:02 +0000 (0:00:02.137) 0:00:12.329 **********
2025-04-13 00:43:02.717233 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:02.717508 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:02.718914 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:02.720912 | orchestrator |
2025-04-13 00:43:02.722137 | orchestrator | TASK [Create block LVs] ********************************************************
2025-04-13 00:43:02.722379 | orchestrator | Sunday 13 April 2025 00:43:02 +0000 (0:00:00.180) 0:00:12.509 **********
2025-04-13 00:43:04.193307 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:04.194932 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:04.195024 | orchestrator |
2025-04-13 00:43:04.195551 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-04-13 00:43:04.195826 | orchestrator | Sunday 13 April 2025 00:43:04 +0000 (0:00:01.475) 0:00:13.985 **********
2025-04-13 00:43:04.372161 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:04.372829 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:04.372870 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:04.373684 | orchestrator |
2025-04-13 00:43:04.376032 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-04-13 00:43:04.508670 | orchestrator | Sunday 13 April 2025 00:43:04 +0000 (0:00:00.179) 0:00:14.164 **********
2025-04-13 00:43:04.508781 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:04.508843 | orchestrator |
2025-04-13 00:43:04.510851 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-04-13 00:43:04.511626 | orchestrator | Sunday 13 April 2025 00:43:04 +0000 (0:00:00.137) 0:00:14.301 **********
2025-04-13 00:43:04.672668 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:04.672809 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:04.674078 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:04.676481 | orchestrator |
2025-04-13 00:43:04.677266 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-04-13 00:43:04.677364 | orchestrator | Sunday 13 April 2025 00:43:04 +0000 (0:00:00.163) 0:00:14.465 **********
2025-04-13 00:43:04.813480 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:04.814180 | orchestrator |
2025-04-13 00:43:04.814361 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-04-13 00:43:04.814680 | orchestrator | Sunday 13 April 2025 00:43:04 +0000 (0:00:00.142) 0:00:14.607 **********
2025-04-13 00:43:04.978348 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:04.980277 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:04.980970 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:04.981440 | orchestrator |
2025-04-13 00:43:04.982282 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-04-13 00:43:04.983145 | orchestrator | Sunday 13 April 2025 00:43:04 +0000 (0:00:00.164) 0:00:14.771 **********
2025-04-13 00:43:05.287182 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:05.287354 | orchestrator |
2025-04-13 00:43:05.289192 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-04-13 00:43:05.291046 | orchestrator | Sunday 13 April 2025 00:43:05 +0000 (0:00:00.307) 0:00:15.079 **********
2025-04-13 00:43:05.455723 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:05.457210 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:05.459057 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:05.459821 | orchestrator |
2025-04-13 00:43:05.460782 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-04-13 00:43:05.461698 | orchestrator | Sunday 13 April 2025 00:43:05 +0000 (0:00:00.167) 0:00:15.247 **********
2025-04-13 00:43:05.595996 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:43:05.596689 | orchestrator |
2025-04-13 00:43:05.597352 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-04-13 00:43:05.598179 | orchestrator | Sunday 13 April 2025 00:43:05 +0000 (0:00:00.140) 0:00:15.387 **********
2025-04-13 00:43:05.773286 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:05.773507 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:05.776020 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:05.776891 | orchestrator |
2025-04-13 00:43:05.777672 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-04-13 00:43:05.778420 | orchestrator | Sunday 13 April 2025 00:43:05 +0000 (0:00:00.177) 0:00:15.564 **********
2025-04-13 00:43:05.938571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:05.939213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:05.940676 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:05.941671 | orchestrator |
2025-04-13 00:43:05.942774 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-04-13 00:43:05.943234 | orchestrator | Sunday 13 April 2025 00:43:05 +0000 (0:00:00.166) 0:00:15.731 **********
2025-04-13 00:43:06.104561 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:06.105028 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:06.105583 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:06.106421 | orchestrator |
2025-04-13 00:43:06.109766 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-04-13 00:43:06.263906 | orchestrator | Sunday 13 April 2025 00:43:06 +0000 (0:00:00.166) 0:00:15.898 **********
2025-04-13 00:43:06.264090 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:06.264246 | orchestrator |
2025-04-13 00:43:06.264761 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-04-13 00:43:06.265009 | orchestrator | Sunday 13 April 2025 00:43:06 +0000 (0:00:00.160) 0:00:16.058 **********
2025-04-13 00:43:06.395313 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:06.395524 | orchestrator |
2025-04-13 00:43:06.396494 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-04-13 00:43:06.396774 | orchestrator | Sunday 13 April 2025 00:43:06 +0000 (0:00:00.131) 0:00:16.189 **********
2025-04-13 00:43:06.546757 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:06.546913 | orchestrator |
2025-04-13 00:43:06.547096 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-04-13 00:43:06.547518 | orchestrator | Sunday 13 April 2025 00:43:06 +0000 (0:00:00.151) 0:00:16.341 **********
2025-04-13 00:43:06.679587 | orchestrator | ok: [testbed-node-3] => {
2025-04-13 00:43:06.680934 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-04-13 00:43:06.682103 | orchestrator | }
2025-04-13 00:43:06.683322 | orchestrator |
2025-04-13 00:43:06.683607 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-04-13 00:43:06.684529 | orchestrator | Sunday 13 April 2025 00:43:06 +0000 (0:00:00.130) 0:00:16.471 **********
2025-04-13 00:43:06.821593 | orchestrator | ok: [testbed-node-3] => {
2025-04-13 00:43:06.821975 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-04-13 00:43:06.823382 | orchestrator | }
2025-04-13 00:43:06.824166 | orchestrator |
2025-04-13 00:43:06.825282 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-04-13 00:43:06.825713 | orchestrator | Sunday 13 April 2025 00:43:06 +0000 (0:00:00.143) 0:00:16.615 **********
2025-04-13 00:43:06.956118 | orchestrator | ok: [testbed-node-3] => {
2025-04-13 00:43:06.957160 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-04-13 00:43:06.958677 | orchestrator | }
2025-04-13 00:43:06.959797 | orchestrator |
2025-04-13 00:43:06.961062 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-04-13 00:43:06.961789 | orchestrator | Sunday 13 April 2025 00:43:06 +0000 (0:00:00.133) 0:00:16.749 **********
2025-04-13 00:43:08.063299 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:43:08.063921 | orchestrator |
2025-04-13 00:43:08.066371 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-04-13 00:43:08.573194 | orchestrator | Sunday 13 April 2025 00:43:08 +0000 (0:00:01.105) 0:00:17.854 **********
2025-04-13 00:43:08.573342 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:43:08.573618 | orchestrator |
2025-04-13 00:43:08.573648 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-04-13 00:43:08.573670 | orchestrator | Sunday 13 April 2025 00:43:08 +0000 (0:00:00.509) 0:00:18.364 **********
2025-04-13 00:43:09.080402 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:43:09.080637 | orchestrator |
2025-04-13 00:43:09.080663 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-04-13 00:43:09.080685 | orchestrator | Sunday 13 April 2025 00:43:09 +0000 (0:00:00.507) 0:00:18.872 **********
2025-04-13 00:43:09.212702 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:43:09.213694 | orchestrator |
2025-04-13 00:43:09.216073 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-04-13 00:43:09.216121 | orchestrator | Sunday 13 April 2025 00:43:09 +0000 (0:00:00.132) 0:00:19.005 **********
2025-04-13 00:43:09.335520 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:09.335813 | orchestrator |
2025-04-13 00:43:09.337779 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-04-13 00:43:09.340060 | orchestrator | Sunday 13 April 2025 00:43:09 +0000 (0:00:00.123) 0:00:19.128 **********
2025-04-13 00:43:09.444067 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:09.444693 | orchestrator |
2025-04-13 00:43:09.446274 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-04-13 00:43:09.448166 | orchestrator | Sunday 13 April 2025 00:43:09 +0000 (0:00:00.108) 0:00:19.237 **********
2025-04-13 00:43:09.582570 | orchestrator | ok: [testbed-node-3] => {
2025-04-13 00:43:09.583182 | orchestrator |  "vgs_report": {
2025-04-13 00:43:09.584588 | orchestrator |  "vg": []
2025-04-13 00:43:09.585967 | orchestrator |  }
2025-04-13 00:43:09.586334 | orchestrator | }
2025-04-13 00:43:09.587084 | orchestrator |
2025-04-13 00:43:09.587443 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-04-13 00:43:09.587839 | orchestrator | Sunday 13 April 2025 00:43:09 +0000 (0:00:00.138) 0:00:19.375 **********
2025-04-13 00:43:09.729069 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:09.729296 | orchestrator |
2025-04-13 00:43:09.729560 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-04-13 00:43:09.730570 | orchestrator | Sunday 13 April 2025 00:43:09 +0000 (0:00:00.144) 0:00:19.520 **********
2025-04-13 00:43:09.871191 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:09.872129 | orchestrator |
2025-04-13 00:43:09.873096 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-04-13 00:43:09.874481 | orchestrator | Sunday 13 April 2025 00:43:09 +0000 (0:00:00.143) 0:00:19.664 **********
2025-04-13 00:43:10.014547 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:10.015786 | orchestrator |
2025-04-13 00:43:10.016145 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-04-13 00:43:10.016654 | orchestrator | Sunday 13 April 2025 00:43:10 +0000 (0:00:00.144) 0:00:19.808 **********
2025-04-13 00:43:10.166080 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:10.166355 | orchestrator |
2025-04-13 00:43:10.167391 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-04-13 00:43:10.168402 | orchestrator | Sunday 13 April 2025 00:43:10 +0000 (0:00:00.150) 0:00:19.959 **********
2025-04-13 00:43:10.471336 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:10.472157 | orchestrator |
2025-04-13 00:43:10.472321 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-04-13 00:43:10.472340 | orchestrator | Sunday 13 April 2025 00:43:10 +0000 (0:00:00.305) 0:00:20.265 **********
2025-04-13 00:43:10.613757 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:10.614597 | orchestrator |
2025-04-13 00:43:10.615327 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-04-13 00:43:10.617110 | orchestrator | Sunday 13 April 2025 00:43:10 +0000 (0:00:00.141) 0:00:20.406 **********
2025-04-13 00:43:10.744012 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:10.747196 | orchestrator |
2025-04-13 00:43:10.747221 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-04-13 00:43:10.888280 | orchestrator | Sunday 13 April 2025 00:43:10 +0000 (0:00:00.129) 0:00:20.536 **********
2025-04-13 00:43:10.888412 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:10.889444 | orchestrator |
2025-04-13 00:43:10.895008 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-04-13 00:43:10.895339 | orchestrator | Sunday 13 April 2025 00:43:10 +0000 (0:00:00.145) 0:00:20.681 **********
2025-04-13 00:43:11.016434 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:11.017155 | orchestrator |
2025-04-13 00:43:11.017870 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-04-13 00:43:11.018518 | orchestrator | Sunday 13 April 2025 00:43:11 +0000 (0:00:00.128) 0:00:20.810 **********
2025-04-13 00:43:11.147362 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:11.148112 | orchestrator |
2025-04-13 00:43:11.149099 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-04-13 00:43:11.149622 | orchestrator | Sunday 13 April 2025 00:43:11 +0000 (0:00:00.130) 0:00:20.940 **********
2025-04-13 00:43:11.293487 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:11.294185 | orchestrator |
2025-04-13 00:43:11.295275 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-04-13 00:43:11.296650 | orchestrator | Sunday 13 April 2025 00:43:11 +0000 (0:00:00.145) 0:00:21.086 **********
2025-04-13 00:43:11.434882 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:11.435265 | orchestrator |
2025-04-13 00:43:11.436513 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-04-13 00:43:11.439230 | orchestrator | Sunday 13 April 2025 00:43:11 +0000 (0:00:00.122) 0:00:21.226 **********
2025-04-13 00:43:11.555723 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:11.557411 | orchestrator |
2025-04-13 00:43:11.558411 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-04-13 00:43:11.559452 | orchestrator | Sunday 13 April 2025 00:43:11 +0000 (0:00:00.122) 0:00:21.349 **********
2025-04-13 00:43:11.689819 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:11.690097 | orchestrator |
2025-04-13 00:43:11.693327 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-04-13 00:43:11.695707 | orchestrator | Sunday 13 April 2025 00:43:11 +0000 (0:00:00.134) 0:00:21.483 **********
2025-04-13 00:43:11.866567 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:11.866938 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:11.867392 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:11.867807 | orchestrator |
2025-04-13 00:43:11.868170 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-04-13 00:43:11.868512 | orchestrator | Sunday 13 April 2025 00:43:11 +0000 (0:00:00.176) 0:00:21.659 **********
2025-04-13 00:43:12.016425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:12.017006 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:12.017336 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:12.017771 | orchestrator |
2025-04-13 00:43:12.018190 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-04-13 00:43:12.018852 | orchestrator | Sunday 13 April 2025 00:43:12 +0000 (0:00:00.150) 0:00:21.810 **********
2025-04-13 00:43:12.386134 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:12.386243 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:12.386255 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:12.386643 | orchestrator |
2025-04-13 00:43:12.387349 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-04-13 00:43:12.387793 | orchestrator | Sunday 13 April 2025 00:43:12 +0000 (0:00:00.369) 0:00:22.179 **********
2025-04-13 00:43:12.544715 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:12.544919 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:12.545331 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:12.546152 | orchestrator |
2025-04-13 00:43:12.549116 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-04-13 00:43:12.739871 | orchestrator | Sunday 13 April 2025 00:43:12 +0000 (0:00:00.157) 0:00:22.336 **********
2025-04-13 00:43:12.740117 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:12.740212 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:12.740768 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:12.741083 | orchestrator |
2025-04-13 00:43:12.741471 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-04-13 00:43:12.742785 | orchestrator | Sunday 13 April 2025 00:43:12 +0000 (0:00:00.196) 0:00:22.533 **********
2025-04-13 00:43:12.936261 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:12.936931 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:12.936995 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:12.937013 | orchestrator |
2025-04-13 00:43:12.937037 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-04-13 00:43:12.937755 | orchestrator | Sunday 13 April 2025 00:43:12 +0000 (0:00:00.188) 0:00:22.722 **********
2025-04-13 00:43:13.106706 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:13.107019 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:13.107209 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:13.108896 | orchestrator |
2025-04-13 00:43:13.111253 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-04-13 00:43:13.112212 | orchestrator | Sunday 13 April 2025 00:43:13 +0000 (0:00:00.178) 0:00:22.900 **********
2025-04-13 00:43:13.276578 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:13.277128 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:13.277933 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:13.278910 | orchestrator |
2025-04-13 00:43:13.281800 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-04-13 00:43:13.282595 | orchestrator | Sunday 13 April 2025 00:43:13 +0000 (0:00:00.168) 0:00:23.069 **********
2025-04-13 00:43:13.775838 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:43:13.779505 | orchestrator |
2025-04-13 00:43:13.781154 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-04-13 00:43:13.781728 | orchestrator | Sunday 13 April 2025 00:43:13 +0000 (0:00:00.497) 0:00:23.567 **********
2025-04-13 00:43:14.284537 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:43:14.285395 | orchestrator |
2025-04-13 00:43:14.288332 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-04-13 00:43:14.445618 | orchestrator | Sunday 13 April 2025 00:43:14 +0000 (0:00:00.510) 0:00:24.077 **********
2025-04-13 00:43:14.445757 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:43:14.451727 | orchestrator |
2025-04-13 00:43:14.453077 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-04-13 00:43:14.453116 | orchestrator | Sunday 13 April 2025 00:43:14 +0000 (0:00:00.153) 0:00:24.231 **********
2025-04-13 00:43:14.628237 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'vg_name': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:14.628403 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'vg_name': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:14.628426 | orchestrator |
2025-04-13 00:43:14.628448 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-04-13 00:43:15.020409 | orchestrator | Sunday 13 April 2025 00:43:14 +0000 (0:00:00.189) 0:00:24.420 **********
2025-04-13 00:43:15.020573 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:15.020737 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:15.021331 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:15.022692 | orchestrator |
2025-04-13 00:43:15.023455 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-04-13 00:43:15.024351 | orchestrator | Sunday 13 April 2025 00:43:15 +0000 (0:00:00.391) 0:00:24.812 **********
2025-04-13 00:43:15.198777 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:15.199252 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:15.200740 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:15.201659 | orchestrator |
2025-04-13 00:43:15.202561 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-04-13 00:43:15.203206 | orchestrator | Sunday 13 April 2025 00:43:15 +0000 (0:00:00.179) 0:00:24.992 **********
2025-04-13 00:43:15.410240 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:43:15.410449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:43:15.411352 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:43:15.413443 | orchestrator |
2025-04-13 00:43:15.413604 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-04-13 00:43:15.414072 | orchestrator | Sunday 13 April 2025 00:43:15 +0000 (0:00:00.210) 0:00:25.202 **********
2025-04-13 00:43:16.099745 | orchestrator | ok: [testbed-node-3] => {
2025-04-13 00:43:16.100732 | orchestrator |  "lvm_report": {
2025-04-13 00:43:16.102504 | orchestrator |  "lv": [
2025-04-13 00:43:16.102903 | orchestrator |  {
2025-04-13 00:43:16.104434 | orchestrator |  "lv_name": "osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26",
2025-04-13 00:43:16.104900 | orchestrator |  "vg_name": "ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26"
2025-04-13 00:43:16.105491 | orchestrator |  },
2025-04-13 00:43:16.106506 | orchestrator |  {
2025-04-13 00:43:16.107838 | orchestrator |  "lv_name": "osd-block-2045bad1-ab77-5a33-981a-e42fb4136085",
2025-04-13 00:43:16.108630 | orchestrator |  "vg_name": "ceph-2045bad1-ab77-5a33-981a-e42fb4136085"
2025-04-13 00:43:16.109477 | orchestrator |  }
2025-04-13 00:43:16.110371 | orchestrator |  ],
2025-04-13 00:43:16.111005 | orchestrator |  "pv": [
2025-04-13 00:43:16.111325 | orchestrator |  {
2025-04-13 00:43:16.112132 | orchestrator |  "pv_name": "/dev/sdb",
2025-04-13 00:43:16.112286 | orchestrator |  "vg_name": "ceph-2045bad1-ab77-5a33-981a-e42fb4136085"
2025-04-13 00:43:16.112780 | orchestrator |  },
2025-04-13 00:43:16.112990 | orchestrator |  {
2025-04-13 00:43:16.113219 | orchestrator |  "pv_name": "/dev/sdc",
2025-04-13 00:43:16.113858 | orchestrator |  "vg_name": "ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26"
2025-04-13 00:43:16.114152 | orchestrator |  }
2025-04-13 00:43:16.114738 | orchestrator |  ]
2025-04-13 00:43:16.114837 | orchestrator |  }
2025-04-13 00:43:16.115187 | orchestrator | }
2025-04-13 00:43:16.115548 | orchestrator |
2025-04-13 00:43:16.116400 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-04-13 00:43:16.116691 | orchestrator |
2025-04-13 00:43:16.116865 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-13 00:43:16.117426 | orchestrator | Sunday 13 April 2025 00:43:16 +0000 (0:00:00.689) 0:00:25.891 **********
2025-04-13 00:43:16.734090 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-04-13 00:43:16.735216 | orchestrator |
2025-04-13 00:43:16.735328 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-13 00:43:16.737736 | orchestrator | Sunday 13 April 2025 00:43:16 +0000 (0:00:00.634) 0:00:26.526 **********
2025-04-13 00:43:16.978109 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:43:16.978299 | orchestrator |
2025-04-13 00:43:16.978647 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:16.979275 | orchestrator | Sunday 13 April 2025 00:43:16 +0000 (0:00:00.244) 0:00:26.770 **********
2025-04-13 00:43:17.442325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-04-13 00:43:17.442482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-04-13 00:43:17.442503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-04-13 00:43:17.442523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-04-13 00:43:17.443181 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-04-13 00:43:17.443964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-04-13 00:43:17.444401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-04-13 00:43:17.445160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-04-13 00:43:17.445685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-04-13 00:43:17.446089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-04-13 00:43:17.447746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-04-13 00:43:17.448563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-04-13 00:43:17.448854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-04-13 00:43:17.449754 | orchestrator |
2025-04-13 00:43:17.450231 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:17.450303 | orchestrator | Sunday 13 April 2025 00:43:17 +0000 (0:00:00.464) 0:00:27.234 **********
2025-04-13 00:43:17.647684 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:17.647882 | orchestrator |
2025-04-13 00:43:17.647915 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:17.648827 | orchestrator | Sunday 13 April 2025 00:43:17 +0000 (0:00:00.205) 0:00:27.440 **********
2025-04-13 00:43:17.834628 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:17.835184 | orchestrator |
2025-04-13 00:43:17.836714 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:17.837314 | orchestrator | Sunday 13 April 2025 00:43:17 +0000 (0:00:00.187) 0:00:27.628 **********
2025-04-13 00:43:18.040479 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:18.041119 | orchestrator |
2025-04-13 00:43:18.041186 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:18.041280 | orchestrator | Sunday 13 April 2025 00:43:18 +0000 (0:00:00.205) 0:00:27.833 **********
2025-04-13 00:43:18.233793 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:18.234364 | orchestrator |
2025-04-13 00:43:18.235459 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:18.236500 | orchestrator | Sunday 13 April 2025 00:43:18 +0000 (0:00:00.192) 0:00:28.026 **********
2025-04-13 00:43:18.425916 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:18.427735 | orchestrator |
2025-04-13 00:43:18.428584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:18.428619 | orchestrator | Sunday 13 April 2025 00:43:18 +0000 (0:00:00.192) 0:00:28.218 **********
2025-04-13 00:43:18.629405 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:18.630927 | orchestrator |
2025-04-13 00:43:18.632758 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:18.633840 | orchestrator | Sunday 13 April 2025 00:43:18 +0000 (0:00:00.204) 0:00:28.422 **********
2025-04-13 00:43:18.846097 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:18.846853 | orchestrator |
2025-04-13 00:43:18.848607 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:18.849939 | orchestrator | Sunday 13 April 2025 00:43:18 +0000 (0:00:00.214) 0:00:28.637 **********
2025-04-13 00:43:19.338656 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:19.339299 | orchestrator |
2025-04-13 00:43:19.340117 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:19.340659 | orchestrator | Sunday 13 April 2025 00:43:19 +0000 (0:00:00.494) 0:00:29.132 **********
2025-04-13 00:43:19.786979 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7)
2025-04-13 00:43:19.788836 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7)
2025-04-13 00:43:19.788933 | orchestrator |
2025-04-13 00:43:19.789663 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:19.791038 | orchestrator | Sunday 13 April 2025 00:43:19 +0000 (0:00:00.445) 0:00:29.577 **********
2025-04-13 00:43:20.292360 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a0e179ac-f513-4bce-8698-5c5d77bb97a6)
2025-04-13 00:43:20.292980 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a0e179ac-f513-4bce-8698-5c5d77bb97a6)
2025-04-13 00:43:20.293908 | orchestrator |
2025-04-13 00:43:20.296588 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:20.719986 | orchestrator | Sunday 13 April 2025 00:43:20 +0000 (0:00:00.505) 0:00:30.082 **********
2025-04-13 00:43:20.720152 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aad8aa45-f541-429b-bfb0-28cd3fbd229c)
2025-04-13 00:43:20.720237 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aad8aa45-f541-429b-bfb0-28cd3fbd229c)
2025-04-13 00:43:20.720260 | orchestrator |
2025-04-13 00:43:20.720674 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:20.720868 | orchestrator | Sunday 13 April 2025 00:43:20 +0000 (0:00:00.428) 0:00:30.511 **********
2025-04-13 00:43:21.196874 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ea334510-65a0-4c82-ab7f-212ffba0ceeb)
2025-04-13 00:43:21.198472 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ea334510-65a0-4c82-ab7f-212ffba0ceeb)
2025-04-13 00:43:21.199408 | orchestrator |
2025-04-13 00:43:21.200293 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:21.200349 | orchestrator | Sunday 13 April 2025 00:43:21 +0000 (0:00:00.476) 0:00:30.988 **********
2025-04-13 00:43:21.524418 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-13 00:43:21.524702 | orchestrator |
2025-04-13 00:43:21.525504 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:21.527812 | orchestrator | Sunday 13 April 2025 00:43:21 +0000 (0:00:00.328) 0:00:31.316 **********
2025-04-13 00:43:22.024899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 =>
(item=loop0) 2025-04-13 00:43:22.026116 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-04-13 00:43:22.027527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-04-13 00:43:22.030618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-04-13 00:43:22.031051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-04-13 00:43:22.033990 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-04-13 00:43:22.034350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-04-13 00:43:22.035661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-04-13 00:43:22.036453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-04-13 00:43:22.037595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-04-13 00:43:22.040182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-04-13 00:43:22.040463 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-04-13 00:43:22.040842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-04-13 00:43:22.041430 | orchestrator | 2025-04-13 00:43:22.042169 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:22.224744 | orchestrator | Sunday 13 April 2025 00:43:22 +0000 (0:00:00.499) 0:00:31.816 ********** 2025-04-13 00:43:22.224876 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:22.225080 | orchestrator | 2025-04-13 
00:43:22.225162 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:22.225901 | orchestrator | Sunday 13 April 2025 00:43:22 +0000 (0:00:00.202) 0:00:32.018 ********** 2025-04-13 00:43:22.424457 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:22.424877 | orchestrator | 2025-04-13 00:43:22.425667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:22.426129 | orchestrator | Sunday 13 April 2025 00:43:22 +0000 (0:00:00.199) 0:00:32.217 ********** 2025-04-13 00:43:22.938351 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:22.940070 | orchestrator | 2025-04-13 00:43:22.941813 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:22.945510 | orchestrator | Sunday 13 April 2025 00:43:22 +0000 (0:00:00.512) 0:00:32.730 ********** 2025-04-13 00:43:23.140077 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:23.141333 | orchestrator | 2025-04-13 00:43:23.144619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:23.144773 | orchestrator | Sunday 13 April 2025 00:43:23 +0000 (0:00:00.202) 0:00:32.932 ********** 2025-04-13 00:43:23.353045 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:23.353248 | orchestrator | 2025-04-13 00:43:23.357536 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:23.357730 | orchestrator | Sunday 13 April 2025 00:43:23 +0000 (0:00:00.212) 0:00:33.145 ********** 2025-04-13 00:43:23.543490 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:23.543658 | orchestrator | 2025-04-13 00:43:23.544253 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:23.547683 | orchestrator | Sunday 13 April 2025 00:43:23 +0000 (0:00:00.190) 
0:00:33.335 ********** 2025-04-13 00:43:23.747674 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:23.748450 | orchestrator | 2025-04-13 00:43:23.750094 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:23.753997 | orchestrator | Sunday 13 April 2025 00:43:23 +0000 (0:00:00.205) 0:00:33.541 ********** 2025-04-13 00:43:23.941721 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:23.941899 | orchestrator | 2025-04-13 00:43:23.942422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:23.942799 | orchestrator | Sunday 13 April 2025 00:43:23 +0000 (0:00:00.194) 0:00:33.736 ********** 2025-04-13 00:43:24.607117 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-04-13 00:43:24.607419 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-04-13 00:43:24.608489 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-04-13 00:43:24.609184 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-04-13 00:43:24.612528 | orchestrator | 2025-04-13 00:43:24.612865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:24.613675 | orchestrator | Sunday 13 April 2025 00:43:24 +0000 (0:00:00.663) 0:00:34.399 ********** 2025-04-13 00:43:24.814541 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:24.815875 | orchestrator | 2025-04-13 00:43:24.815927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:24.816308 | orchestrator | Sunday 13 April 2025 00:43:24 +0000 (0:00:00.207) 0:00:34.607 ********** 2025-04-13 00:43:25.037426 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:25.037730 | orchestrator | 2025-04-13 00:43:25.040075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:25.040700 | orchestrator | Sunday 13 
April 2025 00:43:25 +0000 (0:00:00.219) 0:00:34.826 ********** 2025-04-13 00:43:25.258838 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:25.259648 | orchestrator | 2025-04-13 00:43:25.259698 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-13 00:43:25.260548 | orchestrator | Sunday 13 April 2025 00:43:25 +0000 (0:00:00.226) 0:00:35.052 ********** 2025-04-13 00:43:25.957637 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:25.958853 | orchestrator | 2025-04-13 00:43:25.960271 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-13 00:43:25.961993 | orchestrator | Sunday 13 April 2025 00:43:25 +0000 (0:00:00.697) 0:00:35.750 ********** 2025-04-13 00:43:26.104055 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:26.104478 | orchestrator | 2025-04-13 00:43:26.105358 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-13 00:43:26.314009 | orchestrator | Sunday 13 April 2025 00:43:26 +0000 (0:00:00.147) 0:00:35.897 ********** 2025-04-13 00:43:26.314205 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a50ad019-9a42-5399-96dd-0ec75fe99929'}}) 2025-04-13 00:43:26.315644 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'}}) 2025-04-13 00:43:26.317270 | orchestrator | 2025-04-13 00:43:26.318298 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-13 00:43:26.319727 | orchestrator | Sunday 13 April 2025 00:43:26 +0000 (0:00:00.208) 0:00:36.106 ********** 2025-04-13 00:43:28.143360 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'}) 2025-04-13 00:43:28.144371 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'}) 2025-04-13 00:43:28.147148 | orchestrator | 2025-04-13 00:43:28.149124 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-13 00:43:28.149502 | orchestrator | Sunday 13 April 2025 00:43:28 +0000 (0:00:01.825) 0:00:37.931 ********** 2025-04-13 00:43:28.315675 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})  2025-04-13 00:43:28.316098 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})  2025-04-13 00:43:28.316177 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:28.317021 | orchestrator | 2025-04-13 00:43:28.317431 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-13 00:43:28.318499 | orchestrator | Sunday 13 April 2025 00:43:28 +0000 (0:00:00.172) 0:00:38.103 ********** 2025-04-13 00:43:29.618799 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'}) 2025-04-13 00:43:29.619306 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'}) 2025-04-13 00:43:29.620639 | orchestrator | 2025-04-13 00:43:29.621544 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-13 00:43:29.623549 | orchestrator | Sunday 13 April 2025 00:43:29 +0000 (0:00:01.306) 0:00:39.410 ********** 2025-04-13 00:43:29.787366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 
'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})  2025-04-13 00:43:29.788483 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})  2025-04-13 00:43:29.789776 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:29.793988 | orchestrator | 2025-04-13 00:43:29.796140 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-13 00:43:29.796901 | orchestrator | Sunday 13 April 2025 00:43:29 +0000 (0:00:00.170) 0:00:39.580 ********** 2025-04-13 00:43:29.928512 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:29.929813 | orchestrator | 2025-04-13 00:43:29.932011 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-13 00:43:29.935119 | orchestrator | Sunday 13 April 2025 00:43:29 +0000 (0:00:00.141) 0:00:39.721 ********** 2025-04-13 00:43:30.106458 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})  2025-04-13 00:43:30.108600 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})  2025-04-13 00:43:30.109377 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:30.110657 | orchestrator | 2025-04-13 00:43:30.111395 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-13 00:43:30.112594 | orchestrator | Sunday 13 April 2025 00:43:30 +0000 (0:00:00.177) 0:00:39.899 ********** 2025-04-13 00:43:30.443368 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:30.443849 | orchestrator | 2025-04-13 00:43:30.444395 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-13 00:43:30.445135 | orchestrator | Sunday 
13 April 2025 00:43:30 +0000 (0:00:00.336) 0:00:40.236 ********** 2025-04-13 00:43:30.614904 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})  2025-04-13 00:43:30.615848 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})  2025-04-13 00:43:30.617647 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:30.618511 | orchestrator | 2025-04-13 00:43:30.619403 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-13 00:43:30.620401 | orchestrator | Sunday 13 April 2025 00:43:30 +0000 (0:00:00.169) 0:00:40.406 ********** 2025-04-13 00:43:30.769011 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:30.769387 | orchestrator | 2025-04-13 00:43:30.769430 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-13 00:43:30.770387 | orchestrator | Sunday 13 April 2025 00:43:30 +0000 (0:00:00.154) 0:00:40.560 ********** 2025-04-13 00:43:30.966012 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})  2025-04-13 00:43:30.966509 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})  2025-04-13 00:43:30.967258 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:30.968101 | orchestrator | 2025-04-13 00:43:30.968685 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-13 00:43:30.969281 | orchestrator | Sunday 13 April 2025 00:43:30 +0000 (0:00:00.199) 0:00:40.760 ********** 2025-04-13 00:43:31.116987 | orchestrator | ok: [testbed-node-4] 
2025-04-13 00:43:31.120415 | orchestrator | 2025-04-13 00:43:31.123507 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-13 00:43:31.124510 | orchestrator | Sunday 13 April 2025 00:43:31 +0000 (0:00:00.148) 0:00:40.909 ********** 2025-04-13 00:43:31.332135 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})  2025-04-13 00:43:31.333482 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})  2025-04-13 00:43:31.334763 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:31.336005 | orchestrator | 2025-04-13 00:43:31.338809 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-13 00:43:31.339628 | orchestrator | Sunday 13 April 2025 00:43:31 +0000 (0:00:00.211) 0:00:41.121 ********** 2025-04-13 00:43:31.508626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})  2025-04-13 00:43:31.511897 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})  2025-04-13 00:43:31.513069 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:31.513118 | orchestrator | 2025-04-13 00:43:31.514101 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-13 00:43:31.515179 | orchestrator | Sunday 13 April 2025 00:43:31 +0000 (0:00:00.179) 0:00:41.300 ********** 2025-04-13 00:43:31.715685 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})  2025-04-13 
00:43:31.717841 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})  2025-04-13 00:43:31.718461 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:31.718494 | orchestrator | 2025-04-13 00:43:31.718510 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-13 00:43:31.718531 | orchestrator | Sunday 13 April 2025 00:43:31 +0000 (0:00:00.205) 0:00:41.505 ********** 2025-04-13 00:43:31.851273 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:31.851824 | orchestrator | 2025-04-13 00:43:31.855785 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-13 00:43:31.998594 | orchestrator | Sunday 13 April 2025 00:43:31 +0000 (0:00:00.138) 0:00:41.644 ********** 2025-04-13 00:43:31.998743 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:31.999529 | orchestrator | 2025-04-13 00:43:32.002578 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-13 00:43:32.146857 | orchestrator | Sunday 13 April 2025 00:43:31 +0000 (0:00:00.146) 0:00:41.790 ********** 2025-04-13 00:43:32.147072 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:32.150324 | orchestrator | 2025-04-13 00:43:32.153054 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-13 00:43:32.153899 | orchestrator | Sunday 13 April 2025 00:43:32 +0000 (0:00:00.146) 0:00:41.936 ********** 2025-04-13 00:43:32.296201 | orchestrator | ok: [testbed-node-4] => { 2025-04-13 00:43:32.297408 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-13 00:43:32.299119 | orchestrator | } 2025-04-13 00:43:32.301101 | orchestrator | 2025-04-13 00:43:32.301608 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-13 
00:43:32.302799 | orchestrator | Sunday 13 April 2025 00:43:32 +0000 (0:00:00.148) 0:00:42.085 ********** 2025-04-13 00:43:32.643569 | orchestrator | ok: [testbed-node-4] => { 2025-04-13 00:43:32.644651 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-13 00:43:32.645109 | orchestrator | } 2025-04-13 00:43:32.645603 | orchestrator | 2025-04-13 00:43:32.647617 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-13 00:43:32.647933 | orchestrator | Sunday 13 April 2025 00:43:32 +0000 (0:00:00.350) 0:00:42.435 ********** 2025-04-13 00:43:32.799192 | orchestrator | ok: [testbed-node-4] => { 2025-04-13 00:43:32.799358 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-13 00:43:32.801175 | orchestrator | } 2025-04-13 00:43:32.802118 | orchestrator | 2025-04-13 00:43:32.802628 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-13 00:43:32.803603 | orchestrator | Sunday 13 April 2025 00:43:32 +0000 (0:00:00.157) 0:00:42.593 ********** 2025-04-13 00:43:33.302757 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:43:33.303396 | orchestrator | 2025-04-13 00:43:33.307782 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-13 00:43:33.308608 | orchestrator | Sunday 13 April 2025 00:43:33 +0000 (0:00:00.500) 0:00:43.093 ********** 2025-04-13 00:43:33.820528 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:43:33.821876 | orchestrator | 2025-04-13 00:43:33.821922 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-04-13 00:43:33.822645 | orchestrator | Sunday 13 April 2025 00:43:33 +0000 (0:00:00.519) 0:00:43.613 ********** 2025-04-13 00:43:34.329219 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:43:34.329832 | orchestrator | 2025-04-13 00:43:34.333784 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2025-04-13 00:43:34.334360 | orchestrator | Sunday 13 April 2025 00:43:34 +0000 (0:00:00.507) 0:00:44.121 ********** 2025-04-13 00:43:34.472079 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:43:34.472823 | orchestrator | 2025-04-13 00:43:34.473996 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-13 00:43:34.475918 | orchestrator | Sunday 13 April 2025 00:43:34 +0000 (0:00:00.143) 0:00:44.264 ********** 2025-04-13 00:43:34.594904 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:34.595419 | orchestrator | 2025-04-13 00:43:34.597617 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-13 00:43:34.597994 | orchestrator | Sunday 13 April 2025 00:43:34 +0000 (0:00:00.122) 0:00:44.387 ********** 2025-04-13 00:43:34.712428 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:34.713027 | orchestrator | 2025-04-13 00:43:34.713436 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-13 00:43:34.714734 | orchestrator | Sunday 13 April 2025 00:43:34 +0000 (0:00:00.118) 0:00:44.506 ********** 2025-04-13 00:43:34.857452 | orchestrator | ok: [testbed-node-4] => { 2025-04-13 00:43:34.858003 | orchestrator |  "vgs_report": { 2025-04-13 00:43:34.858091 | orchestrator |  "vg": [] 2025-04-13 00:43:34.858896 | orchestrator |  } 2025-04-13 00:43:34.860014 | orchestrator | } 2025-04-13 00:43:34.860986 | orchestrator | 2025-04-13 00:43:34.861613 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-13 00:43:34.862118 | orchestrator | Sunday 13 April 2025 00:43:34 +0000 (0:00:00.144) 0:00:44.650 ********** 2025-04-13 00:43:35.006448 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:35.006718 | orchestrator | 2025-04-13 00:43:35.006746 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2025-04-13 00:43:35.007405 | orchestrator | Sunday 13 April 2025 00:43:34 +0000 (0:00:00.144) 0:00:44.794 ********** 2025-04-13 00:43:35.135369 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:35.135844 | orchestrator | 2025-04-13 00:43:35.136592 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-04-13 00:43:35.137396 | orchestrator | Sunday 13 April 2025 00:43:35 +0000 (0:00:00.134) 0:00:44.929 ********** 2025-04-13 00:43:35.459653 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:35.459835 | orchestrator | 2025-04-13 00:43:35.460692 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-13 00:43:35.461525 | orchestrator | Sunday 13 April 2025 00:43:35 +0000 (0:00:00.323) 0:00:45.252 ********** 2025-04-13 00:43:35.618651 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:35.618815 | orchestrator | 2025-04-13 00:43:35.619849 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-13 00:43:35.620459 | orchestrator | Sunday 13 April 2025 00:43:35 +0000 (0:00:00.159) 0:00:45.412 ********** 2025-04-13 00:43:35.764502 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:35.765519 | orchestrator | 2025-04-13 00:43:35.766519 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-13 00:43:35.768211 | orchestrator | Sunday 13 April 2025 00:43:35 +0000 (0:00:00.141) 0:00:45.553 ********** 2025-04-13 00:43:35.902925 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:35.905263 | orchestrator | 2025-04-13 00:43:36.047843 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-13 00:43:36.048015 | orchestrator | Sunday 13 April 2025 00:43:35 +0000 (0:00:00.141) 0:00:45.694 ********** 2025-04-13 00:43:36.048062 | orchestrator | skipping: [testbed-node-4] 
2025-04-13 00:43:36.048579 | orchestrator | 2025-04-13 00:43:36.049668 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-13 00:43:36.050551 | orchestrator | Sunday 13 April 2025 00:43:36 +0000 (0:00:00.146) 0:00:45.841 ********** 2025-04-13 00:43:36.197839 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:36.198750 | orchestrator | 2025-04-13 00:43:36.199267 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-13 00:43:36.200117 | orchestrator | Sunday 13 April 2025 00:43:36 +0000 (0:00:00.149) 0:00:45.990 ********** 2025-04-13 00:43:36.358616 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:36.359077 | orchestrator | 2025-04-13 00:43:36.359492 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-13 00:43:36.359523 | orchestrator | Sunday 13 April 2025 00:43:36 +0000 (0:00:00.160) 0:00:46.151 ********** 2025-04-13 00:43:36.503478 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:36.504023 | orchestrator | 2025-04-13 00:43:36.506205 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-13 00:43:36.506327 | orchestrator | Sunday 13 April 2025 00:43:36 +0000 (0:00:00.142) 0:00:46.294 ********** 2025-04-13 00:43:36.659352 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:36.662364 | orchestrator | 2025-04-13 00:43:36.663878 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-13 00:43:36.666550 | orchestrator | Sunday 13 April 2025 00:43:36 +0000 (0:00:00.158) 0:00:46.452 ********** 2025-04-13 00:43:36.792494 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:36.793175 | orchestrator | 2025-04-13 00:43:36.794125 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-13 00:43:36.794880 | orchestrator | 
Sunday 13 April 2025 00:43:36 +0000 (0:00:00.133) 0:00:46.586 ********** 2025-04-13 00:43:36.938437 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:36.940040 | orchestrator | 2025-04-13 00:43:36.941397 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-13 00:43:36.942826 | orchestrator | Sunday 13 April 2025 00:43:36 +0000 (0:00:00.145) 0:00:46.731 ********** 2025-04-13 00:43:37.094315 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:37.094670 | orchestrator | 2025-04-13 00:43:37.095587 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-13 00:43:37.096225 | orchestrator | Sunday 13 April 2025 00:43:37 +0000 (0:00:00.155) 0:00:46.887 ********** 2025-04-13 00:43:37.484814 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})  2025-04-13 00:43:37.485103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})  2025-04-13 00:43:37.486189 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:43:37.486513 | orchestrator | 2025-04-13 00:43:37.488253 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-13 00:43:37.656373 | orchestrator | Sunday 13 April 2025 00:43:37 +0000 (0:00:00.389) 0:00:47.277 ********** 2025-04-13 00:43:37.656553 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})  2025-04-13 00:43:37.656633 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})  2025-04-13 00:43:37.656659 | orchestrator | skipping: 
[testbed-node-4]
2025-04-13 00:43:37.657537 | orchestrator |
2025-04-13 00:43:37.658140 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-04-13 00:43:37.658801 | orchestrator | Sunday 13 April 2025 00:43:37 +0000 (0:00:00.172) 0:00:47.449 **********
2025-04-13 00:43:37.835232 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})
2025-04-13 00:43:37.836239 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})
2025-04-13 00:43:37.837417 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:37.838328 | orchestrator |
2025-04-13 00:43:37.839194 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-04-13 00:43:37.839832 | orchestrator | Sunday 13 April 2025 00:43:37 +0000 (0:00:00.179) 0:00:47.628 **********
2025-04-13 00:43:37.997373 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})
2025-04-13 00:43:37.997710 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})
2025-04-13 00:43:37.998397 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:37.999328 | orchestrator |
2025-04-13 00:43:37.999879 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-04-13 00:43:38.000742 | orchestrator | Sunday 13 April 2025 00:43:37 +0000 (0:00:00.162) 0:00:47.790 **********
2025-04-13 00:43:38.181803 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})
2025-04-13 00:43:38.182702 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})
2025-04-13 00:43:38.183923 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:38.185074 | orchestrator |
2025-04-13 00:43:38.186138 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-04-13 00:43:38.187040 | orchestrator | Sunday 13 April 2025 00:43:38 +0000 (0:00:00.184) 0:00:47.975 **********
2025-04-13 00:43:38.346509 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})
2025-04-13 00:43:38.348155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})
2025-04-13 00:43:38.351606 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:38.351707 | orchestrator |
2025-04-13 00:43:38.351731 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-04-13 00:43:38.352797 | orchestrator | Sunday 13 April 2025 00:43:38 +0000 (0:00:00.164) 0:00:48.140 **********
2025-04-13 00:43:38.532712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})
2025-04-13 00:43:38.533035 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})
2025-04-13 00:43:38.533077 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:38.533535 | orchestrator |
2025-04-13 00:43:38.534004 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-04-13 00:43:38.534415 | orchestrator | Sunday 13 April 2025 00:43:38 +0000 (0:00:00.185) 0:00:48.325 **********
2025-04-13 00:43:38.702425 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})
2025-04-13 00:43:38.702833 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})
2025-04-13 00:43:38.703748 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:38.705083 | orchestrator |
2025-04-13 00:43:38.706252 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-04-13 00:43:38.707098 | orchestrator | Sunday 13 April 2025 00:43:38 +0000 (0:00:00.168) 0:00:48.493 **********
2025-04-13 00:43:39.196260 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:43:39.196467 | orchestrator |
2025-04-13 00:43:39.197811 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-04-13 00:43:39.199251 | orchestrator | Sunday 13 April 2025 00:43:39 +0000 (0:00:00.494) 0:00:48.988 **********
2025-04-13 00:43:39.718497 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:43:39.719464 | orchestrator |
2025-04-13 00:43:39.720699 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-04-13 00:43:39.722295 | orchestrator | Sunday 13 April 2025 00:43:39 +0000 (0:00:00.521) 0:00:49.510 **********
2025-04-13 00:43:40.067274 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:43:40.067766 | orchestrator |
2025-04-13 00:43:40.067811 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-04-13 00:43:40.068637 | orchestrator | Sunday 13 April 2025 00:43:40 +0000 (0:00:00.350) 0:00:49.860 **********
2025-04-13 00:43:40.248763 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'vg_name': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})
2025-04-13 00:43:40.249130 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'vg_name': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})
2025-04-13 00:43:40.249938 | orchestrator |
2025-04-13 00:43:40.250773 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-04-13 00:43:40.251571 | orchestrator | Sunday 13 April 2025 00:43:40 +0000 (0:00:00.182) 0:00:50.042 **********
2025-04-13 00:43:40.433347 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})
2025-04-13 00:43:40.433523 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})
2025-04-13 00:43:40.433553 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:40.434321 | orchestrator |
2025-04-13 00:43:40.435333 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-04-13 00:43:40.435613 | orchestrator | Sunday 13 April 2025 00:43:40 +0000 (0:00:00.181) 0:00:50.224 **********
2025-04-13 00:43:40.608647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})
2025-04-13 00:43:40.610267 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})
2025-04-13 00:43:40.611640 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:40.612316 | orchestrator |
2025-04-13 00:43:40.613361 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-04-13 00:43:40.614875 | orchestrator | Sunday 13 April 2025 00:43:40 +0000 (0:00:00.177) 0:00:50.402 **********
2025-04-13 00:43:40.780826 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})
2025-04-13 00:43:40.781730 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})
2025-04-13 00:43:40.783335 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:43:40.784163 | orchestrator |
2025-04-13 00:43:40.785201 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-04-13 00:43:40.785708 | orchestrator | Sunday 13 April 2025 00:43:40 +0000 (0:00:00.172) 0:00:50.574 **********
2025-04-13 00:43:41.634439 | orchestrator | ok: [testbed-node-4] => {
2025-04-13 00:43:41.635228 | orchestrator |  "lvm_report": {
2025-04-13 00:43:41.637697 | orchestrator |  "lv": [
2025-04-13 00:43:41.638670 | orchestrator |  {
2025-04-13 00:43:41.639158 | orchestrator |  "lv_name": "osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929",
2025-04-13 00:43:41.640144 | orchestrator |  "vg_name": "ceph-a50ad019-9a42-5399-96dd-0ec75fe99929"
2025-04-13 00:43:41.641125 | orchestrator |  },
2025-04-13 00:43:41.641459 | orchestrator |  {
2025-04-13 00:43:41.642264 | orchestrator |  "lv_name": "osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23",
2025-04-13 00:43:41.643060 | orchestrator |  "vg_name": "ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23"
2025-04-13 00:43:41.643490 | orchestrator |  }
2025-04-13 00:43:41.644172 | orchestrator |  ],
2025-04-13 00:43:41.644945 | orchestrator |  "pv": [
2025-04-13 00:43:41.645947 | orchestrator |  {
2025-04-13 00:43:41.646744 | orchestrator |  "pv_name": "/dev/sdb",
2025-04-13 00:43:41.647079 | orchestrator |  "vg_name": "ceph-a50ad019-9a42-5399-96dd-0ec75fe99929"
2025-04-13 00:43:41.647761 | orchestrator |  },
2025-04-13 00:43:41.648122 | orchestrator |  {
2025-04-13 00:43:41.648576 | orchestrator |  "pv_name": "/dev/sdc",
2025-04-13 00:43:41.649372 | orchestrator |  "vg_name": "ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23"
2025-04-13 00:43:41.649757 | orchestrator |  }
2025-04-13 00:43:41.649790 | orchestrator |  ]
2025-04-13 00:43:41.650399 | orchestrator |  }
2025-04-13 00:43:41.650669 | orchestrator | }
2025-04-13 00:43:41.651051 | orchestrator |
2025-04-13 00:43:41.651379 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-04-13 00:43:41.651730 | orchestrator |
2025-04-13 00:43:41.652004 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-13 00:43:41.652365 | orchestrator | Sunday 13 April 2025 00:43:41 +0000 (0:00:00.851) 0:00:51.426 **********
2025-04-13 00:43:41.926325 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-04-13 00:43:41.929261 | orchestrator |
2025-04-13 00:43:41.930171 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-13 00:43:41.930214 | orchestrator | Sunday 13 April 2025 00:43:41 +0000 (0:00:00.286) 0:00:51.712 **********
2025-04-13 00:43:42.191476 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:43:42.192486 | orchestrator |
2025-04-13 00:43:42.194082 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:42.196489 | orchestrator | Sunday 13 April 2025 00:43:42 +0000 (0:00:00.272) 0:00:51.984 **********
2025-04-13 00:43:42.675702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-04-13 00:43:42.678584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-04-13 00:43:42.679770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-04-13 00:43:42.679848 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-04-13 00:43:42.679886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-04-13 00:43:42.680934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-04-13 00:43:42.681849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-04-13 00:43:42.682594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-04-13 00:43:42.684190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-04-13 00:43:42.684516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-04-13 00:43:42.684554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-04-13 00:43:42.685201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-04-13 00:43:42.685953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-04-13 00:43:42.686363 | orchestrator |
2025-04-13 00:43:42.687067 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:42.687545 | orchestrator | Sunday 13 April 2025 00:43:42 +0000 (0:00:00.481) 0:00:52.466 **********
2025-04-13 00:43:42.889938 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:42.890365 | orchestrator |
2025-04-13 00:43:42.891821 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:42.892278 | orchestrator | Sunday 13 April 2025 00:43:42 +0000 (0:00:00.216) 0:00:52.682 **********
2025-04-13 00:43:43.091578 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:43.091787 | orchestrator |
2025-04-13 00:43:43.091820 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:43.092187 | orchestrator | Sunday 13 April 2025 00:43:43 +0000 (0:00:00.202) 0:00:52.884 **********
2025-04-13 00:43:43.289162 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:43.290748 | orchestrator |
2025-04-13 00:43:43.291888 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:43.293332 | orchestrator | Sunday 13 April 2025 00:43:43 +0000 (0:00:00.197) 0:00:53.082 **********
2025-04-13 00:43:43.488547 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:43.489165 | orchestrator |
2025-04-13 00:43:43.489620 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:43.490129 | orchestrator | Sunday 13 April 2025 00:43:43 +0000 (0:00:00.199) 0:00:53.282 **********
2025-04-13 00:43:43.685026 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:43.685370 | orchestrator |
2025-04-13 00:43:43.685778 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:43.687176 | orchestrator | Sunday 13 April 2025 00:43:43 +0000 (0:00:00.196) 0:00:53.478 **********
2025-04-13 00:43:44.103608 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:44.103840 | orchestrator |
2025-04-13 00:43:44.103879 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:44.104736 | orchestrator | Sunday 13 April 2025 00:43:44 +0000 (0:00:00.418) 0:00:53.896 **********
2025-04-13 00:43:44.324476 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:44.325282 | orchestrator |
2025-04-13 00:43:44.325906 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:44.327378 | orchestrator | Sunday 13 April 2025 00:43:44 +0000 (0:00:00.221) 0:00:54.117 **********
2025-04-13 00:43:44.534082 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:44.534342 | orchestrator |
2025-04-13 00:43:44.534733 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:44.535393 | orchestrator | Sunday 13 April 2025 00:43:44 +0000 (0:00:00.209) 0:00:54.326 **********
2025-04-13 00:43:44.958645 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8)
2025-04-13 00:43:44.958917 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8)
2025-04-13 00:43:44.959755 | orchestrator |
2025-04-13 00:43:44.961213 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:44.961642 | orchestrator | Sunday 13 April 2025 00:43:44 +0000 (0:00:00.424) 0:00:54.751 **********
2025-04-13 00:43:45.390934 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_15f38305-5d3a-4a2a-94a9-ec4f360f12f0)
2025-04-13 00:43:45.391180 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_15f38305-5d3a-4a2a-94a9-ec4f360f12f0)
2025-04-13 00:43:45.391651 | orchestrator |
2025-04-13 00:43:45.392147 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:45.392580 | orchestrator | Sunday 13 April 2025 00:43:45 +0000 (0:00:00.433) 0:00:55.184 **********
2025-04-13 00:43:45.832441 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_466f66ff-268f-471d-abe8-9f0f353ab0cc)
2025-04-13 00:43:45.832734 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_466f66ff-268f-471d-abe8-9f0f353ab0cc)
2025-04-13 00:43:45.833446 | orchestrator |
2025-04-13 00:43:45.833803 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:45.834251 | orchestrator | Sunday 13 April 2025 00:43:45 +0000 (0:00:00.438) 0:00:55.623 **********
2025-04-13 00:43:46.282148 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d771f52a-9ada-4427-8de2-0003eafe1256)
2025-04-13 00:43:46.282849 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d771f52a-9ada-4427-8de2-0003eafe1256)
2025-04-13 00:43:46.284425 | orchestrator |
2025-04-13 00:43:46.285426 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-13 00:43:46.286378 | orchestrator | Sunday 13 April 2025 00:43:46 +0000 (0:00:00.451) 0:00:56.075 **********
2025-04-13 00:43:46.636608 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-13 00:43:46.637375 | orchestrator |
2025-04-13 00:43:46.639473 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:46.640324 | orchestrator | Sunday 13 April 2025 00:43:46 +0000 (0:00:00.352) 0:00:56.427 **********
2025-04-13 00:43:47.123389 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-04-13 00:43:47.123870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-04-13 00:43:47.125036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-04-13 00:43:47.125272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-04-13 00:43:47.126296 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-04-13 00:43:47.127173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-04-13 00:43:47.127806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-04-13 00:43:47.128748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-04-13 00:43:47.129043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-04-13 00:43:47.129826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-04-13 00:43:47.130254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-04-13 00:43:47.131022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-04-13 00:43:47.131322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-04-13 00:43:47.132185 | orchestrator |
2025-04-13 00:43:47.132450 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:47.132490 | orchestrator | Sunday 13 April 2025 00:43:47 +0000 (0:00:00.488) 0:00:56.916 **********
2025-04-13 00:43:47.685708 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:47.685948 | orchestrator |
2025-04-13 00:43:47.686305 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:47.687124 | orchestrator | Sunday 13 April 2025 00:43:47 +0000 (0:00:00.561) 0:00:57.477 **********
2025-04-13 00:43:47.900126 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:47.900369 | orchestrator |
2025-04-13 00:43:47.901066 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:47.901449 | orchestrator | Sunday 13 April 2025 00:43:47 +0000 (0:00:00.216) 0:00:57.693 **********
2025-04-13 00:43:48.106633 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:48.320674 | orchestrator |
2025-04-13 00:43:48.320821 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:48.320853 | orchestrator | Sunday 13 April 2025 00:43:48 +0000 (0:00:00.206) 0:00:57.900 **********
2025-04-13 00:43:48.320901 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:48.321175 | orchestrator |
2025-04-13 00:43:48.568184 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:48.568363 | orchestrator | Sunday 13 April 2025 00:43:48 +0000 (0:00:00.214) 0:00:58.114 **********
2025-04-13 00:43:48.568422 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:48.568527 | orchestrator |
2025-04-13 00:43:48.569299 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:48.570008 | orchestrator | Sunday 13 April 2025 00:43:48 +0000 (0:00:00.246) 0:00:58.361 **********
2025-04-13 00:43:48.761005 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:48.761866 | orchestrator |
2025-04-13 00:43:48.762341 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:48.763191 | orchestrator | Sunday 13 April 2025 00:43:48 +0000 (0:00:00.193) 0:00:58.554 **********
2025-04-13 00:43:48.972292 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:48.972542 | orchestrator |
2025-04-13 00:43:48.975112 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:48.978291 | orchestrator | Sunday 13 April 2025 00:43:48 +0000 (0:00:00.210) 0:00:58.765 **********
2025-04-13 00:43:49.172304 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:49.172697 | orchestrator |
2025-04-13 00:43:49.173609 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:49.175033 | orchestrator | Sunday 13 April 2025 00:43:49 +0000 (0:00:00.200) 0:00:58.965 **********
2025-04-13 00:43:50.054839 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-04-13 00:43:50.055092 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-04-13 00:43:50.056508 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-04-13 00:43:50.057798 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-04-13 00:43:50.058480 | orchestrator |
2025-04-13 00:43:50.060584 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:50.061440 | orchestrator | Sunday 13 April 2025 00:43:50 +0000 (0:00:00.879) 0:00:59.845 **********
2025-04-13 00:43:50.255151 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:50.255581 | orchestrator |
2025-04-13 00:43:50.255624 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:50.256349 | orchestrator | Sunday 13 April 2025 00:43:50 +0000 (0:00:00.203) 0:01:00.048 **********
2025-04-13 00:43:50.911706 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:50.911874 | orchestrator |
2025-04-13 00:43:50.912772 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:50.915146 | orchestrator | Sunday 13 April 2025 00:43:50 +0000 (0:00:00.655) 0:01:00.703 **********
2025-04-13 00:43:51.101426 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:51.101686 | orchestrator |
2025-04-13 00:43:51.102455 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-13 00:43:51.103535 | orchestrator | Sunday 13 April 2025 00:43:51 +0000 (0:00:00.190) 0:01:00.894 **********
2025-04-13 00:43:51.308425 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:51.308561 | orchestrator |
2025-04-13 00:43:51.309411 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-04-13 00:43:51.312871 | orchestrator | Sunday 13 April 2025 00:43:51 +0000 (0:00:00.207) 0:01:01.101 **********
2025-04-13 00:43:51.452114 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:51.453857 | orchestrator |
2025-04-13 00:43:51.453915 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-04-13 00:43:51.454299 | orchestrator | Sunday 13 April 2025 00:43:51 +0000 (0:00:00.143) 0:01:01.245 **********
2025-04-13 00:43:51.673430 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'}})
2025-04-13 00:43:51.673938 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cc16a9be-1c89-5ed3-8c34-f79b9c168598'}})
2025-04-13 00:43:51.674732 | orchestrator |
2025-04-13 00:43:51.677216 | orchestrator | TASK [Create block VGs] ********************************************************
2025-04-13 00:43:53.530191 | orchestrator | Sunday 13 April 2025 00:43:51 +0000 (0:00:00.220) 0:01:01.465 **********
2025-04-13 00:43:53.530346 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:43:53.530668 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:43:53.532235 | orchestrator |
2025-04-13 00:43:53.533162 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-04-13 00:43:53.533206 | orchestrator | Sunday 13 April 2025 00:43:53 +0000 (0:00:01.855) 0:01:03.321 **********
2025-04-13 00:43:53.710781 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:43:53.713597 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:43:53.713843 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:53.713881 | orchestrator |
2025-04-13 00:43:53.714867 | orchestrator | TASK [Create block LVs] ********************************************************
2025-04-13 00:43:53.716013 | orchestrator | Sunday 13 April 2025 00:43:53 +0000 (0:00:00.180) 0:01:03.502 **********
2025-04-13 00:43:55.028484 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:43:55.028751 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:43:55.030354 | orchestrator |
2025-04-13 00:43:55.030898 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-04-13 00:43:55.031786 | orchestrator | Sunday 13 April 2025 00:43:55 +0000 (0:00:01.317) 0:01:04.820 **********
2025-04-13 00:43:55.187403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:43:55.187716 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:43:55.187958 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:55.188867 | orchestrator |
2025-04-13 00:43:55.189453 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-04-13 00:43:55.191564 | orchestrator | Sunday 13 April 2025 00:43:55 +0000 (0:00:00.159) 0:01:04.979 **********
2025-04-13 00:43:55.524009 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:55.525026 | orchestrator |
2025-04-13 00:43:55.525346 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-04-13 00:43:55.525397 | orchestrator | Sunday 13 April 2025 00:43:55 +0000 (0:00:00.337) 0:01:05.317 **********
2025-04-13 00:43:55.695547 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:43:55.696236 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:43:55.697394 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:55.700021 | orchestrator |
2025-04-13 00:43:55.701136 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-04-13 00:43:55.701167 | orchestrator | Sunday 13 April 2025 00:43:55 +0000 (0:00:00.169) 0:01:05.487 **********
2025-04-13 00:43:55.865856 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:55.868463 | orchestrator |
2025-04-13 00:43:55.868661 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-04-13 00:43:55.869047 | orchestrator | Sunday 13 April 2025 00:43:55 +0000 (0:00:00.170) 0:01:05.658 **********
2025-04-13 00:43:56.054154 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:43:56.055039 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:43:56.058799 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:56.206116 | orchestrator |
2025-04-13 00:43:56.206238 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-04-13 00:43:56.206256 | orchestrator | Sunday 13 April 2025 00:43:56 +0000 (0:00:00.188) 0:01:05.847 **********
2025-04-13 00:43:56.206288 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:56.206801 | orchestrator |
2025-04-13 00:43:56.207960 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-04-13 00:43:56.209119 | orchestrator | Sunday 13 April 2025 00:43:56 +0000 (0:00:00.151) 0:01:05.998 **********
2025-04-13 00:43:56.384682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:43:56.384892 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:43:56.384925 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:56.385623 | orchestrator |
2025-04-13 00:43:56.386317 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-04-13 00:43:56.386604 | orchestrator | Sunday 13 April 2025 00:43:56 +0000 (0:00:00.180) 0:01:06.178 **********
2025-04-13 00:43:56.542208 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:43:56.542409 | orchestrator |
2025-04-13 00:43:56.542916 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-04-13 00:43:56.543537 | orchestrator | Sunday 13 April 2025 00:43:56 +0000 (0:00:00.157) 0:01:06.335 **********
2025-04-13 00:43:56.710609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:43:56.711514 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:43:56.713337 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:56.714615 | orchestrator |
2025-04-13 00:43:56.715562 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-04-13 00:43:56.716748 | orchestrator | Sunday 13 April 2025 00:43:56 +0000 (0:00:00.166) 0:01:06.502 **********
2025-04-13 00:43:56.885498 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:43:56.886727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:43:56.888351 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:56.888545 | orchestrator |
2025-04-13 00:43:56.889386 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-04-13 00:43:56.890099 | orchestrator | Sunday 13 April 2025 00:43:56 +0000 (0:00:00.176) 0:01:06.679 **********
2025-04-13 00:43:57.056341 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:43:57.057019 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:43:57.057463 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:57.057892 | orchestrator |
2025-04-13 00:43:57.058407 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-04-13 00:43:57.059369 | orchestrator | Sunday 13 April 2025 00:43:57 +0000 (0:00:00.137) 0:01:06.849 **********
2025-04-13 00:43:57.194090 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:57.194529 | orchestrator |
2025-04-13 00:43:57.195629 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-04-13 00:43:57.198411 | orchestrator | Sunday 13 April 2025 00:43:57 +0000 (0:00:00.137) 0:01:06.987 **********
2025-04-13 00:43:57.549523 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:57.550543 | orchestrator |
2025-04-13 00:43:57.553172 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-04-13 00:43:57.694534 | orchestrator | Sunday 13 April 2025 00:43:57 +0000 (0:00:00.354) 0:01:07.341 **********
2025-04-13 00:43:57.694669 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:57.695630 | orchestrator |
2025-04-13 00:43:57.695945 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-04-13 00:43:57.696821 | orchestrator | Sunday 13 April 2025 00:43:57 +0000 (0:00:00.145) 0:01:07.487 **********
2025-04-13 00:43:57.852131 | orchestrator | ok: [testbed-node-5] => {
2025-04-13 00:43:57.852582 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-04-13 00:43:57.853798 | orchestrator | }
2025-04-13 00:43:57.854183 | orchestrator |
2025-04-13 00:43:57.855167 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-04-13 00:43:57.856121 | orchestrator | Sunday 13 April 2025 00:43:57 +0000 (0:00:00.155) 0:01:07.643 **********
2025-04-13 00:43:57.997916 | orchestrator | ok: [testbed-node-5] => {
2025-04-13 00:43:57.999253 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-04-13 00:43:57.999463 | orchestrator | }
2025-04-13 00:43:58.000664 | orchestrator |
2025-04-13 00:43:58.001572 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-04-13 00:43:58.002263 | orchestrator | Sunday 13 April 2025 00:43:57 +0000 (0:00:00.148) 0:01:07.791 **********
2025-04-13 00:43:58.152901 | orchestrator | ok: [testbed-node-5] => {
2025-04-13 00:43:58.154000 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-04-13 00:43:58.155106 | orchestrator | }
2025-04-13 00:43:58.155230 | orchestrator |
2025-04-13 00:43:58.155525 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-04-13 00:43:58.156091 | orchestrator | Sunday 13 April 2025 00:43:58 +0000 (0:00:00.154) 0:01:07.946 **********
2025-04-13 00:43:58.659051 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:43:58.659272 | orchestrator |
2025-04-13 00:43:58.659635 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-04-13 00:43:58.661027 | orchestrator | Sunday 13 April 2025 00:43:58 +0000 (0:00:00.505) 0:01:08.451 **********
2025-04-13 00:43:59.157146 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:43:59.157652 | orchestrator |
2025-04-13 00:43:59.157735 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-04-13 00:43:59.158103 | orchestrator | Sunday 13 April 2025 00:43:59 +0000 (0:00:00.496) 0:01:08.948 **********
2025-04-13 00:43:59.666609 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:43:59.666775 | orchestrator |
2025-04-13 00:43:59.666803 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-04-13 00:43:59.667211 | orchestrator | Sunday 13 April 2025 00:43:59 +0000 (0:00:00.511) 0:01:09.459 **********
2025-04-13 00:43:59.811928 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:43:59.813272 | orchestrator |
2025-04-13 00:43:59.817333 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-04-13 00:43:59.818133 | orchestrator | Sunday 13 April 2025 00:43:59 +0000 (0:00:00.144) 0:01:09.604 **********
2025-04-13 00:43:59.940851 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:43:59.941105 | orchestrator |
2025-04-13 00:43:59.942090 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-04-13 00:43:59.943291 | orchestrator | Sunday 13 April 2025 00:43:59 +0000 (0:00:00.129) 0:01:09.734 **********
2025-04-13 00:44:00.063873 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:00.064779 | orchestrator |
2025-04-13 00:44:00.065162 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-04-13 00:44:00.066285 | orchestrator | Sunday 13 April 2025 00:44:00 +0000 (0:00:00.122) 0:01:09.857 **********
2025-04-13 00:44:00.405860 | orchestrator | ok: [testbed-node-5] => {
2025-04-13 00:44:00.406146 | orchestrator |  "vgs_report": {
2025-04-13 00:44:00.406926 | orchestrator |  "vg": []
2025-04-13 00:44:00.409583 | orchestrator |  }
2025-04-13 00:44:00.410333 | orchestrator | }
2025-04-13 00:44:00.411141 | orchestrator |
2025-04-13 00:44:00.412217 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-04-13 00:44:00.412821 | orchestrator | Sunday 13 April 2025 00:44:00 +0000 (0:00:00.342) 0:01:10.199 **********
2025-04-13 00:44:00.547733 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:00.549260 | orchestrator |
2025-04-13 00:44:00.549681 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-04-13 00:44:00.550962 | orchestrator | Sunday 13 April 2025 00:44:00 +0000 (0:00:00.134) 0:01:10.334 **********
2025-04-13 00:44:00.686611 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:00.686904 | orchestrator |
2025-04-13 00:44:00.688174 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-04-13 00:44:00.689787 | orchestrator | Sunday 13 April 2025 00:44:00 +0000 (0:00:00.146) 0:01:10.479 **********
2025-04-13 00:44:00.834551 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:00.835739 | orchestrator |
2025-04-13 00:44:00.838145 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-04-13 00:44:00.992316 | orchestrator | Sunday 13 April 2025 00:44:00 +0000 (0:00:00.146) 0:01:10.626 **********
2025-04-13 00:44:00.992468 |
orchestrator | skipping: [testbed-node-5] 2025-04-13 00:44:00.992606 | orchestrator | 2025-04-13 00:44:00.993121 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-13 00:44:00.994204 | orchestrator | Sunday 13 April 2025 00:44:00 +0000 (0:00:00.159) 0:01:10.785 ********** 2025-04-13 00:44:01.127039 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:44:01.128122 | orchestrator | 2025-04-13 00:44:01.128963 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-13 00:44:01.129011 | orchestrator | Sunday 13 April 2025 00:44:01 +0000 (0:00:00.134) 0:01:10.920 ********** 2025-04-13 00:44:01.277803 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:44:01.278110 | orchestrator | 2025-04-13 00:44:01.278151 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-13 00:44:01.278553 | orchestrator | Sunday 13 April 2025 00:44:01 +0000 (0:00:00.151) 0:01:11.071 ********** 2025-04-13 00:44:01.420467 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:44:01.421145 | orchestrator | 2025-04-13 00:44:01.421557 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-13 00:44:01.422570 | orchestrator | Sunday 13 April 2025 00:44:01 +0000 (0:00:00.141) 0:01:11.213 ********** 2025-04-13 00:44:01.575301 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:44:01.575791 | orchestrator | 2025-04-13 00:44:01.577024 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-13 00:44:01.577286 | orchestrator | Sunday 13 April 2025 00:44:01 +0000 (0:00:00.154) 0:01:11.368 ********** 2025-04-13 00:44:01.765890 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:44:01.768378 | orchestrator | 2025-04-13 00:44:01.768465 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2025-04-13 00:44:01.769244 | orchestrator | Sunday 13 April 2025 00:44:01 +0000 (0:00:00.190) 0:01:11.559 **********
2025-04-13 00:44:01.910188 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:01.912706 | orchestrator |
2025-04-13 00:44:01.912833 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-04-13 00:44:01.912852 | orchestrator | Sunday 13 April 2025 00:44:01 +0000 (0:00:00.142) 0:01:11.701 **********
2025-04-13 00:44:02.062462 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:02.064776 | orchestrator |
2025-04-13 00:44:02.064842 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-04-13 00:44:02.064858 | orchestrator | Sunday 13 April 2025 00:44:02 +0000 (0:00:00.150) 0:01:11.852 **********
2025-04-13 00:44:02.433873 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:02.434453 | orchestrator |
2025-04-13 00:44:02.435422 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-04-13 00:44:02.436770 | orchestrator | Sunday 13 April 2025 00:44:02 +0000 (0:00:00.374) 0:01:12.227 **********
2025-04-13 00:44:02.581636 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:02.582178 | orchestrator |
2025-04-13 00:44:02.582949 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-04-13 00:44:02.583362 | orchestrator | Sunday 13 April 2025 00:44:02 +0000 (0:00:00.148) 0:01:12.375 **********
2025-04-13 00:44:02.766742 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:02.767837 | orchestrator |
2025-04-13 00:44:02.768659 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-04-13 00:44:02.769370 | orchestrator | Sunday 13 April 2025 00:44:02 +0000 (0:00:00.184) 0:01:12.559 **********
2025-04-13 00:44:02.940467 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:44:02.940748 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:44:02.941630 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:02.943152 | orchestrator |
2025-04-13 00:44:02.944696 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-04-13 00:44:02.944735 | orchestrator | Sunday 13 April 2025 00:44:02 +0000 (0:00:00.173) 0:01:12.732 **********
2025-04-13 00:44:03.103547 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:44:03.104752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:44:03.107573 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:03.108325 | orchestrator |
2025-04-13 00:44:03.108358 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-04-13 00:44:03.109499 | orchestrator | Sunday 13 April 2025 00:44:03 +0000 (0:00:00.163) 0:01:12.896 **********
2025-04-13 00:44:03.274749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:44:03.276651 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:44:03.276861 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:03.277691 | orchestrator |
2025-04-13 00:44:03.278577 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-04-13 00:44:03.280395 | orchestrator | Sunday 13 April 2025 00:44:03 +0000 (0:00:00.172) 0:01:13.068 **********
2025-04-13 00:44:03.467649 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:44:03.468120 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:44:03.469188 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:03.471813 | orchestrator |
2025-04-13 00:44:03.474423 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-04-13 00:44:03.474459 | orchestrator | Sunday 13 April 2025 00:44:03 +0000 (0:00:00.191) 0:01:13.260 **********
2025-04-13 00:44:03.636118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:44:03.636349 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:44:03.636658 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:03.636940 | orchestrator |
2025-04-13 00:44:03.637187 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-04-13 00:44:03.637656 | orchestrator | Sunday 13 April 2025 00:44:03 +0000 (0:00:00.169) 0:01:13.430 **********
2025-04-13 00:44:03.824628 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:44:03.825956 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:44:03.826105 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:03.826137 | orchestrator |
2025-04-13 00:44:03.829186 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-04-13 00:44:03.829313 | orchestrator | Sunday 13 April 2025 00:44:03 +0000 (0:00:00.184) 0:01:13.615 **********
2025-04-13 00:44:04.014819 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:44:04.015621 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:44:04.018478 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:04.018623 | orchestrator |
2025-04-13 00:44:04.018643 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-04-13 00:44:04.018660 | orchestrator | Sunday 13 April 2025 00:44:04 +0000 (0:00:00.191) 0:01:13.806 **********
2025-04-13 00:44:04.192374 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:44:04.192662 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:44:04.195308 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:04.195526 | orchestrator |
2025-04-13 00:44:04.195555 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-04-13 00:44:04.195571 | orchestrator | Sunday 13 April 2025 00:44:04 +0000 (0:00:00.177) 0:01:13.984 **********
2025-04-13 00:44:04.913541 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:44:04.916401 | orchestrator |
2025-04-13 00:44:04.916999 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-04-13 00:44:04.917041 | orchestrator | Sunday 13 April 2025 00:44:04 +0000 (0:00:00.720) 0:01:14.705 **********
2025-04-13 00:44:05.439238 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:44:05.439414 | orchestrator |
2025-04-13 00:44:05.441210 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-04-13 00:44:05.442184 | orchestrator | Sunday 13 April 2025 00:44:05 +0000 (0:00:00.527) 0:01:15.232 **********
2025-04-13 00:44:05.597357 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:44:05.598282 | orchestrator |
2025-04-13 00:44:05.599950 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-04-13 00:44:05.602510 | orchestrator | Sunday 13 April 2025 00:44:05 +0000 (0:00:00.157) 0:01:15.390 **********
2025-04-13 00:44:05.797694 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'vg_name': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:44:05.799023 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'vg_name': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:44:05.800181 | orchestrator |
2025-04-13 00:44:05.801055 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-04-13 00:44:05.802185 | orchestrator | Sunday 13 April 2025 00:44:05 +0000 (0:00:00.200) 0:01:15.590 **********
2025-04-13 00:44:05.982480 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:44:05.989410 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:44:06.148265 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:06.148387 | orchestrator |
2025-04-13 00:44:06.148406 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-04-13 00:44:06.148420 | orchestrator | Sunday 13 April 2025 00:44:05 +0000 (0:00:00.183) 0:01:15.774 **********
2025-04-13 00:44:06.148449 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:44:06.149139 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:44:06.149773 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:06.152674 | orchestrator |
2025-04-13 00:44:06.332334 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-04-13 00:44:06.332455 | orchestrator | Sunday 13 April 2025 00:44:06 +0000 (0:00:00.166) 0:01:15.941 **********
2025-04-13 00:44:06.332491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:44:06.334855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:44:06.335272 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:06.338593 | orchestrator |
2025-04-13 00:44:06.339930 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-04-13 00:44:06.958319 | orchestrator | Sunday 13 April 2025 00:44:06 +0000 (0:00:00.180) 0:01:16.122 **********
2025-04-13 00:44:06.958462 | orchestrator | ok: [testbed-node-5] => {
2025-04-13 00:44:06.959332 | orchestrator |  "lvm_report": {
2025-04-13 00:44:06.961720 | orchestrator |  "lv": [
2025-04-13 00:44:06.962197 | orchestrator |  {
2025-04-13 00:44:06.963310 | orchestrator |  "lv_name": "osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a",
2025-04-13 00:44:06.964479 | orchestrator |  "vg_name": "ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a"
2025-04-13 00:44:06.965233 | orchestrator |  },
2025-04-13 00:44:06.966279 | orchestrator |  {
2025-04-13 00:44:06.967482 | orchestrator |  "lv_name": "osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598",
2025-04-13 00:44:06.968113 | orchestrator |  "vg_name": "ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598"
2025-04-13 00:44:06.969190 | orchestrator |  }
2025-04-13 00:44:06.969640 | orchestrator |  ],
2025-04-13 00:44:06.970398 | orchestrator |  "pv": [
2025-04-13 00:44:06.972062 | orchestrator |  {
2025-04-13 00:44:06.973176 | orchestrator |  "pv_name": "/dev/sdb",
2025-04-13 00:44:06.975033 | orchestrator |  "vg_name": "ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a"
2025-04-13 00:44:06.976264 | orchestrator |  },
2025-04-13 00:44:06.977228 | orchestrator |  {
2025-04-13 00:44:06.978744 | orchestrator |  "pv_name": "/dev/sdc",
2025-04-13 00:44:06.979565 | orchestrator |  "vg_name": "ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598"
2025-04-13 00:44:06.980669 | orchestrator |  }
2025-04-13 00:44:06.981660 | orchestrator |  ]
2025-04-13 00:44:06.982287 | orchestrator |  }
2025-04-13 00:44:06.983427 | orchestrator | }
2025-04-13 00:44:06.983895 | orchestrator |
2025-04-13 00:44:06.984422 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:44:06.984894 | orchestrator | 2025-04-13 00:44:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:44:06.985012 | orchestrator | 2025-04-13 00:44:06 | INFO  | Please wait and do not abort execution.
2025-04-13 00:44:06.986262 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-04-13 00:44:06.986699 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-04-13 00:44:06.987278 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-04-13 00:44:06.987915 | orchestrator |
2025-04-13 00:44:06.988248 | orchestrator |
2025-04-13 00:44:06.989414 | orchestrator |
2025-04-13 00:44:06.990319 | orchestrator | TASKS RECAP ********************************************************************
2025-04-13 00:44:06.990783 | orchestrator | Sunday 13 April 2025 00:44:06 +0000 (0:00:00.628) 0:01:16.750 **********
2025-04-13 00:44:06.991757 | orchestrator | ===============================================================================
2025-04-13 00:44:06.992479 | orchestrator | Create block VGs -------------------------------------------------------- 5.82s
2025-04-13 00:44:06.993195 | orchestrator | Create block LVs -------------------------------------------------------- 4.10s
2025-04-13 00:44:06.994392 | orchestrator | Print LVM report data --------------------------------------------------- 2.17s
2025-04-13 00:44:06.995510 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.11s
2025-04-13 00:44:06.995911 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.71s
2025-04-13 00:44:06.996851 | orchestrator | Add known links to the list of available block devices ------------------ 1.66s
2025-04-13 00:44:06.997115 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s
2025-04-13 00:44:06.997877 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.53s
2025-04-13 00:44:06.999021 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s
2025-04-13 00:44:06.999260 | orchestrator | Add known partitions to the list of available block devices ------------- 1.46s
2025-04-13 00:44:07.000099 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.16s
2025-04-13 00:44:07.000797 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s
2025-04-13 00:44:07.001320 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s
2025-04-13 00:44:07.001403 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.76s
2025-04-13 00:44:07.002212 | orchestrator | Get initial list of available block devices ----------------------------- 0.75s
2025-04-13 00:44:07.002384 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.74s
2025-04-13 00:44:07.002803 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.72s
2025-04-13 00:44:07.003121 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2025-04-13 00:44:07.003858 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-04-13 00:44:07.003998 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.66s
2025-04-13 00:44:08.883628 | orchestrator | 2025-04-13 00:44:08 | INFO  | Task b3ce7f88-58e7-4127-b76e-33d65b4a6344 (facts) was prepared for execution.
2025-04-13 00:44:12.029423 | orchestrator | 2025-04-13 00:44:08 | INFO  | It takes a moment until task b3ce7f88-58e7-4127-b76e-33d65b4a6344 (facts) has been started and output is visible here.
2025-04-13 00:44:12.029577 | orchestrator |
2025-04-13 00:44:12.032099 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-04-13 00:44:13.067338 | orchestrator |
2025-04-13 00:44:13.067477 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-04-13 00:44:13.067559 | orchestrator | Sunday 13 April 2025 00:44:12 +0000 (0:00:00.197) 0:00:00.197 **********
2025-04-13 00:44:13.067618 | orchestrator | ok: [testbed-manager]
2025-04-13 00:44:13.067696 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:44:13.067720 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:44:13.068812 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:44:13.072207 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:44:13.073656 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:44:13.075400 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:44:13.075466 | orchestrator |
2025-04-13 00:44:13.077440 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-04-13 00:44:13.238121 | orchestrator | Sunday 13 April 2025 00:44:13 +0000 (0:00:01.036) 0:00:01.234 **********
2025-04-13 00:44:13.238231 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:44:13.319133 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:44:13.400919 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:44:13.482828 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:44:13.559119 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:44:14.302791 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:44:14.303386 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:14.305591 | orchestrator |
2025-04-13 00:44:14.305928 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-04-13 00:44:14.307025 | orchestrator |
2025-04-13 00:44:14.308443 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-04-13 00:44:14.309694 | orchestrator | Sunday 13 April 2025 00:44:14 +0000 (0:00:01.237) 0:00:02.472 **********
2025-04-13 00:44:18.800330 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:44:18.800943 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:44:18.802259 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:44:18.802836 | orchestrator | ok: [testbed-manager]
2025-04-13 00:44:18.804157 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:44:18.805329 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:44:18.806301 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:44:18.806696 | orchestrator |
2025-04-13 00:44:18.808152 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-04-13 00:44:18.809886 | orchestrator |
2025-04-13 00:44:18.810059 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-04-13 00:44:18.811077 | orchestrator | Sunday 13 April 2025 00:44:18 +0000 (0:00:04.499) 0:00:06.972 **********
2025-04-13 00:44:19.139297 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:44:19.218738 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:44:19.291106 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:44:19.371076 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:44:19.450229 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:44:19.495708 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:44:19.497291 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:44:19.498535 | orchestrator |
2025-04-13 00:44:19.499652 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:44:19.500192 | orchestrator | 2025-04-13 00:44:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-13 00:44:19.500724 | orchestrator | 2025-04-13 00:44:19 | INFO  | Please wait and do not abort execution. 2025-04-13 00:44:19.502097 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:44:19.502449 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:44:19.503246 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:44:19.503751 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:44:19.504357 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:44:19.505439 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:44:19.505863 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:44:19.506321 | orchestrator | 2025-04-13 00:44:19.507283 | orchestrator | Sunday 13 April 2025 00:44:19 +0000 (0:00:00.695) 0:00:07.667 ********** 2025-04-13 00:44:19.507529 | orchestrator | =============================================================================== 2025-04-13 00:44:19.507621 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.50s 2025-04-13 00:44:19.507951 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2025-04-13 00:44:19.508199 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.04s 2025-04-13 00:44:19.509236 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.70s 2025-04-13 00:44:20.107260 | orchestrator | 2025-04-13 00:44:20.107858 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Apr 13 00:44:20 UTC 2025 2025-04-13 00:44:21.496955 | 
orchestrator | 2025-04-13 00:44:21.497147 | orchestrator | 2025-04-13 00:44:21 | INFO  | Collection nutshell is prepared for execution 2025-04-13 00:44:21.501414 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [0] - dotfiles 2025-04-13 00:44:21.501453 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [0] - homer 2025-04-13 00:44:21.503253 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [0] - netdata 2025-04-13 00:44:21.503277 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [0] - openstackclient 2025-04-13 00:44:21.503292 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [0] - phpmyadmin 2025-04-13 00:44:21.503306 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [0] - common 2025-04-13 00:44:21.503326 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [1] -- loadbalancer 2025-04-13 00:44:21.503494 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [2] --- opensearch 2025-04-13 00:44:21.503518 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [2] --- mariadb-ng 2025-04-13 00:44:21.503532 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [3] ---- horizon 2025-04-13 00:44:21.503546 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [3] ---- keystone 2025-04-13 00:44:21.503560 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [4] ----- neutron 2025-04-13 00:44:21.503573 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [5] ------ wait-for-nova 2025-04-13 00:44:21.503588 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [5] ------ octavia 2025-04-13 00:44:21.503607 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [4] ----- barbican 2025-04-13 00:44:21.503682 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [4] ----- designate 2025-04-13 00:44:21.503703 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [4] ----- ironic 2025-04-13 00:44:21.504093 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [4] ----- placement 2025-04-13 00:44:21.504117 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [4] ----- magnum 2025-04-13 00:44:21.504131 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [1] 
-- openvswitch 2025-04-13 00:44:21.504145 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [2] --- ovn 2025-04-13 00:44:21.504164 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [1] -- memcached 2025-04-13 00:44:21.504236 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [1] -- redis 2025-04-13 00:44:21.504253 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [1] -- rabbitmq-ng 2025-04-13 00:44:21.504267 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [0] - kubernetes 2025-04-13 00:44:21.504285 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [1] -- kubeconfig 2025-04-13 00:44:21.504353 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [1] -- copy-kubeconfig 2025-04-13 00:44:21.504373 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [0] - ceph 2025-04-13 00:44:21.505863 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [1] -- ceph-pools 2025-04-13 00:44:21.505953 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [2] --- copy-ceph-keys 2025-04-13 00:44:21.506079 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [3] ---- cephclient 2025-04-13 00:44:21.506100 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-04-13 00:44:21.506114 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [4] ----- wait-for-keystone 2025-04-13 00:44:21.506147 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [5] ------ kolla-ceph-rgw 2025-04-13 00:44:21.506244 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [5] ------ glance 2025-04-13 00:44:21.506261 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [5] ------ cinder 2025-04-13 00:44:21.506279 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [5] ------ nova 2025-04-13 00:44:21.671269 | orchestrator | 2025-04-13 00:44:21 | INFO  | A [4] ----- prometheus 2025-04-13 00:44:21.671388 | orchestrator | 2025-04-13 00:44:21 | INFO  | D [5] ------ grafana 2025-04-13 00:44:21.671424 | orchestrator | 2025-04-13 00:44:21 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-04-13 00:44:23.646536 | 
orchestrator | 2025-04-13 00:44:21 | INFO  | Tasks are running in the background
2025-04-13 00:44:23.646694 | orchestrator | 2025-04-13 00:44:23 | INFO  | No task IDs specified, wait for all currently running tasks
2025-04-13 00:44:25.750479 | orchestrator | 2025-04-13 00:44:25 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:44:25.750830 | orchestrator | 2025-04-13 00:44:25 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:44:25.751577 | orchestrator | 2025-04-13 00:44:25 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:44:25.754249 | orchestrator | 2025-04-13 00:44:25 | INFO  | Task 838e72a5-7491-4849-8b7e-3649059a30ea is in state STARTED
2025-04-13 00:44:25.754582 | orchestrator | 2025-04-13 00:44:25 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:44:25.755446 | orchestrator | 2025-04-13 00:44:25 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:44:28.800431 | orchestrator | 2025-04-13 00:44:25 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:44:28.800579 | orchestrator | 2025-04-13 00:44:28 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:44:28.801079 | orchestrator | 2025-04-13 00:44:28 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:44:28.801201 | orchestrator | 2025-04-13 00:44:28 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:44:28.801225 | orchestrator | 2025-04-13 00:44:28 | INFO  | Task 838e72a5-7491-4849-8b7e-3649059a30ea is in state STARTED
2025-04-13 00:44:28.801644 | orchestrator | 2025-04-13 00:44:28 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:44:28.802110 | orchestrator | 2025-04-13 00:44:28 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:44:28.802178 | orchestrator | 2025-04-13 00:44:28 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:44:31.845594 | orchestrator | 2025-04-13 00:44:31 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:44:31.846086 | orchestrator | 2025-04-13 00:44:31 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:44:31.846142 | orchestrator | 2025-04-13 00:44:31 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:44:31.847413 | orchestrator | 2025-04-13 00:44:31 | INFO  | Task 838e72a5-7491-4849-8b7e-3649059a30ea is in state STARTED
2025-04-13 00:44:34.891677 | orchestrator | 2025-04-13 00:44:31 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:44:34.891811 | orchestrator | 2025-04-13 00:44:31 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:44:34.891831 | orchestrator | 2025-04-13 00:44:31 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:44:34.891864 | orchestrator | 2025-04-13 00:44:34 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:44:34.892935 | orchestrator | 2025-04-13 00:44:34 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:44:34.893287 | orchestrator | 2025-04-13 00:44:34 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:44:34.895468 | orchestrator | 2025-04-13 00:44:34 | INFO  | Task 838e72a5-7491-4849-8b7e-3649059a30ea is in state STARTED
2025-04-13 00:44:34.896282 | orchestrator | 2025-04-13 00:44:34 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:44:34.897803 | orchestrator | 2025-04-13 00:44:34 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:44:37.957026 | orchestrator | 2025-04-13 00:44:34 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:44:37.957147 | orchestrator | 2025-04-13 00:44:37 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:44:40.993412 | orchestrator | 2025-04-13 00:44:37 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:44:40.993536 | orchestrator | 2025-04-13 00:44:37 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:44:40.993552 | orchestrator | 2025-04-13 00:44:37 | INFO  | Task 838e72a5-7491-4849-8b7e-3649059a30ea is in state STARTED
2025-04-13 00:44:40.993565 | orchestrator | 2025-04-13 00:44:37 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:44:40.993576 | orchestrator | 2025-04-13 00:44:37 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:44:40.993589 | orchestrator | 2025-04-13 00:44:37 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:44:40.993615 | orchestrator | 2025-04-13 00:44:40 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:44:40.993684 | orchestrator | 2025-04-13 00:44:40 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:44:40.996727 | orchestrator | 2025-04-13 00:44:40 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:44:44.059435 | orchestrator | 2025-04-13 00:44:40 | INFO  | Task 838e72a5-7491-4849-8b7e-3649059a30ea is in state STARTED
2025-04-13 00:44:44.059559 | orchestrator | 2025-04-13 00:44:40 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:44:44.059579 | orchestrator | 2025-04-13 00:44:40 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:44:44.059595 | orchestrator | 2025-04-13 00:44:40 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:44:44.059629 | orchestrator | 2025-04-13 00:44:44 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:44:44.059948 | orchestrator | 2025-04-13
00:44:44 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:44:44.060011 | orchestrator | 2025-04-13 00:44:44 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:44:44.062373 | orchestrator |
2025-04-13 00:44:44.062429 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-04-13 00:44:44.062448 | orchestrator |
2025-04-13 00:44:44.062465 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-04-13 00:44:44.062480 | orchestrator | Sunday 13 April 2025 00:44:29 +0000 (0:00:00.520) 0:00:00.520 **********
2025-04-13 00:44:44.062494 | orchestrator | changed: [testbed-manager]
2025-04-13 00:44:44.062510 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:44:44.062525 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:44:44.062538 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:44:44.062578 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:44:44.062593 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:44:44.062607 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:44:44.062620 | orchestrator |
2025-04-13 00:44:44.062634 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-04-13 00:44:44.062655 | orchestrator | Sunday 13 April 2025 00:44:32 +0000 (0:00:02.084) 0:00:03.638 **********
2025-04-13 00:44:44.062671 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-04-13 00:44:44.062685 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-04-13 00:44:44.062705 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-04-13 00:44:44.062719 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-04-13 00:44:44.062733 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-04-13 00:44:44.062746 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-04-13 00:44:44.062760 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-04-13 00:44:44.062774 | orchestrator |
2025-04-13 00:44:44.062788 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-04-13 00:44:44.062803 | orchestrator | Sunday 13 April 2025 00:44:34 +0000 (0:00:02.824) 0:00:05.722 **********
2025-04-13 00:44:44.062820 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-13 00:44:33.804988', 'end': '2025-04-13 00:44:33.814022', 'delta': '0:00:00.009034', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-13 00:44:44.062844 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-13 00:44:33.791278', 'end': '2025-04-13 00:44:33.794295', 'delta': '0:00:00.003017', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-13 00:44:44.062860 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-13 00:44:34.019536', 'end': '2025-04-13 00:44:34.028069', 'delta': '0:00:00.008533', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-13 00:44:44.062903 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-13 00:44:34.229428', 'end': '2025-04-13 00:44:34.237464', 'delta': '0:00:00.008036', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-13 00:44:44.062929 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-13 00:44:34.367410', 'end': '2025-04-13 00:44:34.376707', 'delta': '0:00:00.009297', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-13 00:44:44.062944 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-13 00:44:34.470782', 'end': '2025-04-13 00:44:34.476548', 'delta': '0:00:00.005766', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-13 00:44:44.062964 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-13 00:44:34.622453', 'end': '2025-04-13 00:44:34.630676', 'delta': '0:00:00.008223', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-04-13 00:44:44.063007 | orchestrator |
2025-04-13 00:44:44.063023 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-04-13 00:44:44.063039 | orchestrator | Sunday 13 April 2025 00:44:37 +0000 (0:00:02.824) 0:00:08.547 **********
2025-04-13 00:44:44.063056 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-04-13 00:44:44.063071 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-04-13 00:44:44.063087 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-04-13 00:44:44.063103 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-04-13 00:44:44.063119 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-04-13 00:44:44.063134 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-04-13 00:44:44.063150 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-04-13 00:44:44.063165 | orchestrator |
2025-04-13 00:44:44.063188 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:44:44.063204 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:44:44.063222 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:44:44.063238 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:44:44.063261 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:44:44.063298 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:44:44.063316 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:44:44.063332 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:44:44.063346 | orchestrator |
2025-04-13 00:44:44.063360 | orchestrator | Sunday 13 April 2025 00:44:40 +0000 (0:00:02.605)
0:00:11.152 **********
2025-04-13 00:44:44.063374 | orchestrator | ===============================================================================
2025-04-13 00:44:44.063387 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.12s
2025-04-13 00:44:44.063401 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.82s
2025-04-13 00:44:44.063415 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.61s
2025-04-13 00:44:44.063429 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.08s
2025-04-13 00:44:44.063446 | orchestrator | 2025-04-13 00:44:44 | INFO  | Task 838e72a5-7491-4849-8b7e-3649059a30ea is in state SUCCESS
2025-04-13 00:44:44.067518 | orchestrator | 2025-04-13 00:44:44 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:44:44.067556 | orchestrator | 2025-04-13 00:44:44 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:44:44.068750 | orchestrator | 2025-04-13 00:44:44 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:44:47.119060 | orchestrator | 2025-04-13 00:44:44 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:44:47.119173 | orchestrator | 2025-04-13 00:44:47 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:44:47.120886 | orchestrator | 2025-04-13 00:44:47 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:44:47.120913 | orchestrator | 2025-04-13 00:44:47 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:44:47.120927 | orchestrator | 2025-04-13 00:44:47 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:44:47.121585 | orchestrator | 2025-04-13 00:44:47 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:44:47.121664 | orchestrator | 2025-04-13 00:44:47 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:44:50.187632 | orchestrator | 2025-04-13 00:44:47 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:44:50.187820 | orchestrator | 2025-04-13 00:44:50 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:44:53.255864 | orchestrator | 2025-04-13 00:44:50 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:44:53.256099 | orchestrator | 2025-04-13 00:44:50 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:44:53.256124 | orchestrator | 2025-04-13 00:44:50 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:44:53.256138 | orchestrator | 2025-04-13 00:44:50 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:44:53.256152 | orchestrator | 2025-04-13 00:44:50 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:44:53.256166 | orchestrator | 2025-04-13 00:44:50 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:44:53.256198 | orchestrator | 2025-04-13 00:44:53 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:44:53.257056 | orchestrator | 2025-04-13 00:44:53 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:44:53.257087 | orchestrator | 2025-04-13 00:44:53 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:44:53.257524 | orchestrator | 2025-04-13 00:44:53 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:44:53.258338 | orchestrator | 2025-04-13 00:44:53 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:44:53.259654 | orchestrator | 2025-04-13 00:44:53 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:44:56.320155 | orchestrator | 2025-04-13 00:44:53 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:44:56.320305 | orchestrator | 2025-04-13 00:44:56 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:44:56.321783 | orchestrator | 2025-04-13 00:44:56 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:44:56.323626 | orchestrator | 2025-04-13 00:44:56 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:44:56.327405 | orchestrator | 2025-04-13 00:44:56 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:44:56.333066 | orchestrator | 2025-04-13 00:44:56 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:44:59.380285 | orchestrator | 2025-04-13 00:44:56 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:44:59.380439 | orchestrator | 2025-04-13 00:44:56 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:44:59.380528 | orchestrator | 2025-04-13 00:44:59 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:44:59.383486 | orchestrator | 2025-04-13 00:44:59 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:44:59.386402 | orchestrator | 2025-04-13 00:44:59 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:44:59.386863 | orchestrator | 2025-04-13 00:44:59 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:44:59.386890 | orchestrator | 2025-04-13 00:44:59 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:44:59.386911 | orchestrator | 2025-04-13 00:44:59 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:44:59.388603 | orchestrator | 2025-04-13 00:44:59 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:02.451539 | orchestrator | 2025-04-13 00:45:02 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:45:02.452236 | orchestrator | 2025-04-13 00:45:02 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state STARTED
2025-04-13 00:45:02.452306 | orchestrator | 2025-04-13 00:45:02 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:02.455892 | orchestrator | 2025-04-13 00:45:02 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:02.459879 | orchestrator | 2025-04-13 00:45:02 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:45:02.460416 | orchestrator | 2025-04-13 00:45:02 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:05.515222 | orchestrator | 2025-04-13 00:45:02 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:05.515360 | orchestrator | 2025-04-13 00:45:05 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:45:05.518235 | orchestrator | 2025-04-13 00:45:05 | INFO  | Task de5d300f-a2ba-43cd-abce-9beea5ed279f is in state SUCCESS
2025-04-13 00:45:05.518288 | orchestrator | 2025-04-13 00:45:05 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:05.518743 | orchestrator | 2025-04-13 00:45:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:05.518771 | orchestrator | 2025-04-13 00:45:05 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:05.521421 | orchestrator | 2025-04-13 00:45:05 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:45:08.590313 | orchestrator | 2025-04-13 00:45:05 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:08.590441 | orchestrator | 2025-04-13 00:45:05 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:08.590480 | orchestrator | 2025-04-13 00:45:08 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:45:08.591249 | orchestrator | 2025-04-13 00:45:08 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:08.595887 | orchestrator | 2025-04-13 00:45:08 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:08.598109 | orchestrator | 2025-04-13 00:45:08 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:08.605112 | orchestrator | 2025-04-13 00:45:08 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:45:08.605727 | orchestrator | 2025-04-13 00:45:08 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:11.662191 | orchestrator | 2025-04-13 00:45:08 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:11.662335 | orchestrator | 2025-04-13 00:45:11 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:45:11.662444 | orchestrator | 2025-04-13 00:45:11 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:11.662465 | orchestrator | 2025-04-13 00:45:11 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:11.662485 | orchestrator | 2025-04-13 00:45:11 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:11.663649 | orchestrator | 2025-04-13 00:45:11 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:45:11.663873 | orchestrator | 2025-04-13 00:45:11 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:11.663963 | orchestrator | 2025-04-13 00:45:11 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:14.723330 | orchestrator | 2025-04-13 00:45:14 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:45:14.725581 | orchestrator | 2025-04-13 00:45:14 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:14.726008 | orchestrator | 2025-04-13 00:45:14 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:14.726093 | orchestrator | 2025-04-13 00:45:14 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:14.726114 | orchestrator | 2025-04-13 00:45:14 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:45:14.726765 | orchestrator | 2025-04-13 00:45:14 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:17.804877 | orchestrator | 2025-04-13 00:45:14 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:17.805050 | orchestrator | 2025-04-13 00:45:17 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:45:17.808123 | orchestrator | 2025-04-13 00:45:17 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:17.808443 | orchestrator | 2025-04-13 00:45:17 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:17.809100 | orchestrator | 2025-04-13 00:45:17 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:17.809468 | orchestrator | 2025-04-13 00:45:17 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:45:17.809986 | orchestrator | 2025-04-13 00:45:17 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:20.844742 | orchestrator | 2025-04-13 00:45:17 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:20.844883 | orchestrator | 2025-04-13 00:45:20 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state STARTED
2025-04-13 00:45:20.845318 | orchestrator | 2025-04-13 00:45:20 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:20.849079 | orchestrator | 2025-04-13 00:45:20 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:20.849144 | orchestrator | 2025-04-13 00:45:20 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:20.851973 | orchestrator | 2025-04-13 00:45:20 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:45:20.852275 | orchestrator | 2025-04-13 00:45:20 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:23.900512 | orchestrator | 2025-04-13 00:45:20 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:23.900617 | orchestrator | 2025-04-13 00:45:23 | INFO  | Task e27bff1a-3ccd-494e-82ce-239e251aee74 is in state SUCCESS
2025-04-13 00:45:23.901466 | orchestrator | 2025-04-13 00:45:23 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:23.901504 | orchestrator | 2025-04-13 00:45:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:23.902703 | orchestrator | 2025-04-13 00:45:23 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:23.906645 | orchestrator | 2025-04-13 00:45:23 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:45:23.912330 | orchestrator | 2025-04-13 00:45:23 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:26.966846 | orchestrator | 2025-04-13 00:45:23 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:26.967086 | orchestrator | 2025-04-13 00:45:26 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:26.968641 | orchestrator | 2025-04-13 00:45:26 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:26.968734 | orchestrator | 2025-04-13 00:45:26 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:26.969376 | orchestrator | 2025-04-13 00:45:26 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:45:26.969702 | orchestrator | 2025-04-13 00:45:26 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:26.969785 | orchestrator | 2025-04-13 00:45:26 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:30.021588 | orchestrator | 2025-04-13 00:45:30 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:30.021777 | orchestrator | 2025-04-13 00:45:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:30.022834 | orchestrator | 2025-04-13 00:45:30 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:30.023485 | orchestrator | 2025-04-13 00:45:30 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:45:30.027365 | orchestrator | 2025-04-13 00:45:30 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:30.027463 | orchestrator | 2025-04-13 00:45:30 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:33.061928 | orchestrator | 2025-04-13 00:45:33 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:33.062258 | orchestrator | 2025-04-13 00:45:33 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:33.064358 | orchestrator | 2025-04-13 00:45:33 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:33.065961 | orchestrator | 2025-04-13 00:45:33 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state STARTED
2025-04-13 00:45:33.069715 | orchestrator | 2025-04-13 00:45:33 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:36.137686 | orchestrator | 2025-04-13 00:45:33 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:36.137811 | orchestrator |
2025-04-13 00:45:36.137826 | orchestrator |
2025-04-13 00:45:36.137835 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-04-13 00:45:36.137844 | orchestrator |
2025-04-13 00:45:36.137852 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-04-13 00:45:36.137861 | orchestrator | Sunday 13 April 2025 00:44:29 +0000 (0:00:00.555) 0:00:00.555 **********
2025-04-13 00:45:36.137909 | orchestrator | ok: [testbed-manager] => {
2025-04-13 00:45:36.137919 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-04-13 00:45:36.137930 | orchestrator | }
2025-04-13 00:45:36.137939 | orchestrator |
2025-04-13 00:45:36.137948 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-04-13 00:45:36.137957 | orchestrator | Sunday 13 April 2025 00:44:30 +0000 (0:00:01.015) 0:00:02.134 **********
2025-04-13 00:45:36.137967 | orchestrator | ok: [testbed-manager]
2025-04-13 00:45:36.138008 | orchestrator |
2025-04-13 00:45:36.138084 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-04-13 00:45:36.138095 | orchestrator | Sunday 13 April 2025 00:44:31 +0000 (0:00:01.015) 0:00:02.134 **********
2025-04-13 00:45:36.138104 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-04-13 00:45:36.138111 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-04-13 00:45:36.138119 | orchestrator |
2025-04-13 00:45:36.138127 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-04-13 00:45:36.138134 | orchestrator | Sunday 13 April 2025 00:44:32 +0000 (0:00:01.038) 0:00:03.172 **********
2025-04-13 00:45:36.138164 | orchestrator | changed: [testbed-manager]
2025-04-13 00:45:36.138173 | orchestrator |
2025-04-13 00:45:36.138182
| orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-04-13 00:45:36.138191 | orchestrator | Sunday 13 April 2025 00:44:35 +0000 (0:00:02.812) 0:00:05.985 ********** 2025-04-13 00:45:36.138199 | orchestrator | changed: [testbed-manager] 2025-04-13 00:45:36.138208 | orchestrator | 2025-04-13 00:45:36.138216 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-04-13 00:45:36.138224 | orchestrator | Sunday 13 April 2025 00:44:36 +0000 (0:00:01.619) 0:00:07.605 ********** 2025-04-13 00:45:36.138231 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-04-13 00:45:36.138239 | orchestrator | ok: [testbed-manager] 2025-04-13 00:45:36.138247 | orchestrator | 2025-04-13 00:45:36.138255 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-04-13 00:45:36.138264 | orchestrator | Sunday 13 April 2025 00:45:01 +0000 (0:00:24.279) 0:00:31.885 ********** 2025-04-13 00:45:36.138273 | orchestrator | changed: [testbed-manager] 2025-04-13 00:45:36.138281 | orchestrator | 2025-04-13 00:45:36.138290 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:45:36.138299 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 00:45:36.138310 | orchestrator | 2025-04-13 00:45:36.138319 | orchestrator | Sunday 13 April 2025 00:45:03 +0000 (0:00:02.516) 0:00:34.401 ********** 2025-04-13 00:45:36.138327 | orchestrator | =============================================================================== 2025-04-13 00:45:36.138334 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.28s 2025-04-13 00:45:36.138342 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.81s 2025-04-13 00:45:36.138349 | orchestrator | 
osism.services.homer : Restart homer service ---------------------------- 2.52s 2025-04-13 00:45:36.138362 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.62s 2025-04-13 00:45:36.138371 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.04s 2025-04-13 00:45:36.138380 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.02s 2025-04-13 00:45:36.138387 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.56s 2025-04-13 00:45:36.138395 | orchestrator | 2025-04-13 00:45:36.138403 | orchestrator | 2025-04-13 00:45:36.138411 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-04-13 00:45:36.138419 | orchestrator | 2025-04-13 00:45:36.138427 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-04-13 00:45:36.138435 | orchestrator | Sunday 13 April 2025 00:44:29 +0000 (0:00:00.463) 0:00:00.463 ********** 2025-04-13 00:45:36.138444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-04-13 00:45:36.138453 | orchestrator | 2025-04-13 00:45:36.138461 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-04-13 00:45:36.138469 | orchestrator | Sunday 13 April 2025 00:44:29 +0000 (0:00:00.220) 0:00:00.684 ********** 2025-04-13 00:45:36.138476 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-04-13 00:45:36.138485 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-04-13 00:45:36.138494 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-04-13 00:45:36.138503 | orchestrator | 2025-04-13 00:45:36.138511 | orchestrator | TASK 
[osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-04-13 00:45:36.138519 | orchestrator | Sunday 13 April 2025 00:44:30 +0000 (0:00:01.191) 0:00:01.875 ********** 2025-04-13 00:45:36.138527 | orchestrator | changed: [testbed-manager] 2025-04-13 00:45:36.138536 | orchestrator | 2025-04-13 00:45:36.138544 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-04-13 00:45:36.138558 | orchestrator | Sunday 13 April 2025 00:44:32 +0000 (0:00:01.256) 0:00:03.131 ********** 2025-04-13 00:45:36.138566 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-04-13 00:45:36.138575 | orchestrator | ok: [testbed-manager] 2025-04-13 00:45:36.138583 | orchestrator | 2025-04-13 00:45:36.138602 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-04-13 00:45:36.138614 | orchestrator | Sunday 13 April 2025 00:45:12 +0000 (0:00:40.243) 0:00:43.375 ********** 2025-04-13 00:45:36.138622 | orchestrator | changed: [testbed-manager] 2025-04-13 00:45:36.138630 | orchestrator | 2025-04-13 00:45:36.138637 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-04-13 00:45:36.138645 | orchestrator | Sunday 13 April 2025 00:45:14 +0000 (0:00:02.336) 0:00:45.712 ********** 2025-04-13 00:45:36.138653 | orchestrator | ok: [testbed-manager] 2025-04-13 00:45:36.138662 | orchestrator | 2025-04-13 00:45:36.138669 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-04-13 00:45:36.138676 | orchestrator | Sunday 13 April 2025 00:45:16 +0000 (0:00:01.502) 0:00:47.214 ********** 2025-04-13 00:45:36.138684 | orchestrator | changed: [testbed-manager] 2025-04-13 00:45:36.138692 | orchestrator | 2025-04-13 00:45:36.138701 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 
2025-04-13 00:45:36.138708 | orchestrator | Sunday 13 April 2025 00:45:18 +0000 (0:00:02.811) 0:00:50.026 ********** 2025-04-13 00:45:36.138715 | orchestrator | changed: [testbed-manager] 2025-04-13 00:45:36.138721 | orchestrator | 2025-04-13 00:45:36.138728 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-04-13 00:45:36.138735 | orchestrator | Sunday 13 April 2025 00:45:20 +0000 (0:00:01.122) 0:00:51.148 ********** 2025-04-13 00:45:36.138742 | orchestrator | changed: [testbed-manager] 2025-04-13 00:45:36.138749 | orchestrator | 2025-04-13 00:45:36.138756 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-04-13 00:45:36.138763 | orchestrator | Sunday 13 April 2025 00:45:20 +0000 (0:00:00.617) 0:00:51.765 ********** 2025-04-13 00:45:36.138772 | orchestrator | ok: [testbed-manager] 2025-04-13 00:45:36.138780 | orchestrator | 2025-04-13 00:45:36.138789 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:45:36.138797 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 00:45:36.138805 | orchestrator | 2025-04-13 00:45:36.138813 | orchestrator | Sunday 13 April 2025 00:45:21 +0000 (0:00:00.424) 0:00:52.189 ********** 2025-04-13 00:45:36.138821 | orchestrator | =============================================================================== 2025-04-13 00:45:36.138828 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 40.24s 2025-04-13 00:45:36.138835 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.81s 2025-04-13 00:45:36.138844 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.34s 2025-04-13 00:45:36.138855 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.50s 2025-04-13 
00:45:36.138892 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.26s 2025-04-13 00:45:36.138900 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.19s 2025-04-13 00:45:36.138907 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.12s 2025-04-13 00:45:36.138915 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.62s 2025-04-13 00:45:36.138923 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.42s 2025-04-13 00:45:36.138931 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.22s 2025-04-13 00:45:36.138938 | orchestrator | 2025-04-13 00:45:36.138945 | orchestrator | 2025-04-13 00:45:36.138953 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 00:45:36.138961 | orchestrator | 2025-04-13 00:45:36.138975 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 00:45:36.138982 | orchestrator | Sunday 13 April 2025 00:44:29 +0000 (0:00:00.442) 0:00:00.442 ********** 2025-04-13 00:45:36.138990 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-04-13 00:45:36.138999 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-04-13 00:45:36.139007 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-04-13 00:45:36.139014 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-04-13 00:45:36.139021 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-04-13 00:45:36.139030 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-04-13 00:45:36.139038 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-04-13 00:45:36.139046 | orchestrator | 2025-04-13 
00:45:36.139053 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-04-13 00:45:36.139061 | orchestrator | 2025-04-13 00:45:36.139069 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-04-13 00:45:36.139077 | orchestrator | Sunday 13 April 2025 00:44:30 +0000 (0:00:01.816) 0:00:02.258 ********** 2025-04-13 00:45:36.139093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:45:36.139103 | orchestrator | 2025-04-13 00:45:36.139112 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-04-13 00:45:36.139119 | orchestrator | Sunday 13 April 2025 00:44:32 +0000 (0:00:01.305) 0:00:03.564 ********** 2025-04-13 00:45:36.139126 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:45:36.139134 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:45:36.139143 | orchestrator | ok: [testbed-manager] 2025-04-13 00:45:36.139151 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:45:36.139158 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:45:36.139165 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:45:36.139172 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:45:36.139178 | orchestrator | 2025-04-13 00:45:36.139186 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-04-13 00:45:36.139201 | orchestrator | Sunday 13 April 2025 00:44:34 +0000 (0:00:02.617) 0:00:06.182 ********** 2025-04-13 00:45:36.139210 | orchestrator | ok: [testbed-manager] 2025-04-13 00:45:36.139218 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:45:36.139226 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:45:36.139234 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:45:36.139242 | 
orchestrator | ok: [testbed-node-3] 2025-04-13 00:45:36.139249 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:45:36.139257 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:45:36.139267 | orchestrator | 2025-04-13 00:45:36.139276 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-04-13 00:45:36.139283 | orchestrator | Sunday 13 April 2025 00:44:38 +0000 (0:00:03.688) 0:00:09.870 ********** 2025-04-13 00:45:36.139290 | orchestrator | changed: [testbed-manager] 2025-04-13 00:45:36.139298 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:45:36.139306 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:45:36.139314 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:45:36.139321 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:45:36.139328 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:45:36.139336 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:45:36.139345 | orchestrator | 2025-04-13 00:45:36.139352 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-04-13 00:45:36.139359 | orchestrator | Sunday 13 April 2025 00:44:40 +0000 (0:00:02.372) 0:00:12.243 ********** 2025-04-13 00:45:36.139366 | orchestrator | changed: [testbed-manager] 2025-04-13 00:45:36.139374 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:45:36.139382 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:45:36.139395 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:45:36.139402 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:45:36.139409 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:45:36.139418 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:45:36.139426 | orchestrator | 2025-04-13 00:45:36.139434 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-04-13 00:45:36.139441 | orchestrator | Sunday 13 April 2025 00:44:50 +0000 (0:00:09.677) 0:00:21.921 
********** 2025-04-13 00:45:36.139449 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:45:36.139457 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:45:36.139463 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:45:36.139471 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:45:36.139479 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:45:36.139487 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:45:36.139494 | orchestrator | changed: [testbed-manager] 2025-04-13 00:45:36.139501 | orchestrator | 2025-04-13 00:45:36.139509 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-04-13 00:45:36.139518 | orchestrator | Sunday 13 April 2025 00:45:08 +0000 (0:00:18.348) 0:00:40.269 ********** 2025-04-13 00:45:36.139527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:45:36.139537 | orchestrator | 2025-04-13 00:45:36.139544 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-04-13 00:45:36.139551 | orchestrator | Sunday 13 April 2025 00:45:11 +0000 (0:00:02.494) 0:00:42.764 ********** 2025-04-13 00:45:36.139558 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-04-13 00:45:36.139565 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-04-13 00:45:36.139574 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-04-13 00:45:36.139583 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-04-13 00:45:36.139591 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-04-13 00:45:36.139600 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-04-13 00:45:36.139608 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 
2025-04-13 00:45:36.139616 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-04-13 00:45:36.139623 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-04-13 00:45:36.139631 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-04-13 00:45:36.139640 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-04-13 00:45:36.139647 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-04-13 00:45:36.139655 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-04-13 00:45:36.139662 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-04-13 00:45:36.139690 | orchestrator |
2025-04-13 00:45:36.139697 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-04-13 00:45:36.139706 | orchestrator | Sunday 13 April 2025 00:45:18 +0000 (0:00:06.822) 0:00:49.586 **********
2025-04-13 00:45:36.139714 | orchestrator | ok: [testbed-manager]
2025-04-13 00:45:36.139722 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:45:36.139729 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:45:36.139736 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:45:36.139743 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:45:36.139750 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:45:36.139757 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:45:36.139763 | orchestrator |
2025-04-13 00:45:36.139772 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-04-13 00:45:36.139780 | orchestrator | Sunday 13 April 2025 00:45:20 +0000 (0:00:02.334) 0:00:51.920 **********
2025-04-13 00:45:36.139788 | orchestrator | changed: [testbed-manager]
2025-04-13 00:45:36.139796 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:45:36.139805 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:45:36.139817 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:45:36.139824 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:45:36.139832 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:45:36.139840 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:45:36.139848 | orchestrator |
2025-04-13 00:45:36.139855 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-04-13 00:45:36.139882 | orchestrator | Sunday 13 April 2025 00:45:22 +0000 (0:00:02.225) 0:00:54.146 **********
2025-04-13 00:45:36.139891 | orchestrator | ok: [testbed-manager]
2025-04-13 00:45:36.139899 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:45:36.139906 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:45:36.139914 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:45:36.139928 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:45:36.139935 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:45:36.139942 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:45:36.139949 | orchestrator |
2025-04-13 00:45:36.139956 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-04-13 00:45:36.139963 | orchestrator | Sunday 13 April 2025 00:45:24 +0000 (0:00:01.875) 0:00:56.021 **********
2025-04-13 00:45:36.139971 | orchestrator | ok: [testbed-manager]
2025-04-13 00:45:36.139979 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:45:36.139987 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:45:36.139995 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:45:36.140004 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:45:36.140012 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:45:36.140019 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:45:36.140027 | orchestrator |
2025-04-13 00:45:36.140034 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-04-13 00:45:36.140041 | orchestrator | Sunday 13 April 2025 00:45:26 +0000 (0:00:01.902) 0:00:57.923 **********
2025-04-13 00:45:36.140048 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-04-13 00:45:36.140059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:45:36.140068 | orchestrator |
2025-04-13 00:45:36.140076 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-04-13 00:45:36.140085 | orchestrator | Sunday 13 April 2025 00:45:28 +0000 (0:00:01.634) 0:00:59.558 **********
2025-04-13 00:45:36.140093 | orchestrator | changed: [testbed-manager]
2025-04-13 00:45:36.140100 | orchestrator |
2025-04-13 00:45:36.140108 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-04-13 00:45:36.140116 | orchestrator | Sunday 13 April 2025 00:45:30 +0000 (0:00:01.906) 0:01:01.464 **********
2025-04-13 00:45:36.140123 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:45:36.140131 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:45:36.140144 | orchestrator | changed: [testbed-manager]
2025-04-13 00:45:36.140152 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:45:36.140160 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:45:36.140167 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:45:36.140175 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:45:36.140184 | orchestrator |
2025-04-13 00:45:36.140192 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:45:36.140199 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:45:36.140207 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:45:36.140215 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:45:36.140227 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:45:36.140240 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:45:36.140247 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:45:36.140254 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:45:36.140263 | orchestrator |
2025-04-13 00:45:36.140271 | orchestrator | Sunday 13 April 2025 00:45:33 +0000 (0:00:03.354) 0:01:04.819 **********
2025-04-13 00:45:36.140278 | orchestrator | ===============================================================================
2025-04-13 00:45:36.140285 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 18.35s
2025-04-13 00:45:36.140293 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.68s
2025-04-13 00:45:36.140301 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.82s
2025-04-13 00:45:36.140309 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.69s
2025-04-13 00:45:36.140316 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.35s
2025-04-13 00:45:36.140323 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.62s
2025-04-13 00:45:36.140330 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.49s
2025-04-13 00:45:36.140337 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.37s
2025-04-13 00:45:36.140344 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.33s
2025-04-13 00:45:36.140352 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.23s
2025-04-13 00:45:36.140360 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.91s
2025-04-13 00:45:36.140369 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.90s
2025-04-13 00:45:36.140377 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.88s
2025-04-13 00:45:36.140385 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.82s
2025-04-13 00:45:36.140398 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.63s
2025-04-13 00:45:39.181769 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.31s
2025-04-13 00:45:39.182100 | orchestrator | 2025-04-13 00:45:36 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:39.182138 | orchestrator | 2025-04-13 00:45:36 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:39.182154 | orchestrator | 2025-04-13 00:45:36 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:39.182169 | orchestrator | 2025-04-13 00:45:36 | INFO  | Task 4d371203-dff3-40b3-9368-ef2c2f0abb96 is in state SUCCESS
2025-04-13 00:45:39.182186 | orchestrator | 2025-04-13 00:45:36 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:39.182204 | orchestrator | 2025-04-13 00:45:36 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:39.182255 | orchestrator | 2025-04-13 00:45:39 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:39.192809 | orchestrator | 2025-04-13 00:45:39 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:42.237929 | orchestrator | 2025-04-13 00:45:39 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:42.238143 | orchestrator | 2025-04-13 00:45:39 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:42.238209 | orchestrator | 2025-04-13 00:45:39 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:42.238244 | orchestrator | 2025-04-13 00:45:42 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:42.242302 | orchestrator | 2025-04-13 00:45:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:42.268177 | orchestrator | 2025-04-13 00:45:42 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:45.314463 | orchestrator | 2025-04-13 00:45:42 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:45.314585 | orchestrator | 2025-04-13 00:45:42 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:45.314624 | orchestrator | 2025-04-13 00:45:45 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:45.315231 | orchestrator | 2025-04-13 00:45:45 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:45.316060 | orchestrator | 2025-04-13 00:45:45 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:45.316938 | orchestrator | 2025-04-13 00:45:45 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state STARTED
2025-04-13 00:45:48.366191 | orchestrator | 2025-04-13 00:45:45 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:48.366345 | orchestrator | 2025-04-13 00:45:48 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:48.368565 | orchestrator | 2025-04-13 00:45:48 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:48.368629 | orchestrator | 2025-04-13 00:45:48 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:51.409045 | orchestrator | 2025-04-13 00:45:48 | INFO  | Task 046a1c80-cde7-4b42-8b1f-e7ebacc931e9 is in state SUCCESS
2025-04-13 00:45:51.409171 | orchestrator | 2025-04-13 00:45:48 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:51.409210 | orchestrator | 2025-04-13 00:45:51 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:51.412999 | orchestrator | 2025-04-13 00:45:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:51.413149 | orchestrator | 2025-04-13 00:45:51 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:54.486938 | orchestrator | 2025-04-13 00:45:51 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:54.487048 | orchestrator | 2025-04-13 00:45:54 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:54.489619 | orchestrator | 2025-04-13 00:45:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:54.496205 | orchestrator | 2025-04-13 00:45:54 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:57.535142 | orchestrator | 2025-04-13 00:45:54 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:45:57.535285 | orchestrator | 2025-04-13 00:45:57 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:45:57.539529 | orchestrator | 2025-04-13 00:45:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:45:57.539622 | orchestrator | 2025-04-13 00:45:57 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:45:57.539858 | orchestrator | 2025-04-13 00:45:57 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:00.586331 | orchestrator | 2025-04-13 00:46:00 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:00.586548 | orchestrator | 2025-04-13 00:46:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:00.586597 | orchestrator | 2025-04-13 00:46:00 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:03.639916 | orchestrator | 2025-04-13 00:46:00 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:03.640032 | orchestrator | 2025-04-13 00:46:03 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:03.640867 | orchestrator | 2025-04-13 00:46:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:03.643171 | orchestrator | 2025-04-13 00:46:03 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:06.692978 | orchestrator | 2025-04-13 00:46:03 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:06.693111 | orchestrator | 2025-04-13 00:46:06 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:06.699037 | orchestrator | 2025-04-13 00:46:06 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:09.761330 | orchestrator | 2025-04-13 00:46:06 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:09.761481 | orchestrator | 2025-04-13 00:46:06 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:09.761533 | orchestrator | 2025-04-13 00:46:09 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:12.812268 | orchestrator | 2025-04-13 00:46:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:12.812398 | orchestrator | 2025-04-13 00:46:09 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:12.812419 | orchestrator | 2025-04-13 00:46:09 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:12.812452 | orchestrator | 2025-04-13 00:46:12 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:12.814360 | orchestrator | 2025-04-13 00:46:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:12.815259 | orchestrator | 2025-04-13 00:46:12 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:12.816083 | orchestrator | 2025-04-13 00:46:12 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:15.872495 | orchestrator | 2025-04-13 00:46:15 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:15.874115 | orchestrator | 2025-04-13 00:46:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:15.875651 | orchestrator | 2025-04-13 00:46:15 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:18.920466 | orchestrator | 2025-04-13 00:46:15 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:18.920636 | orchestrator | 2025-04-13 00:46:18 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:18.920824 | orchestrator | 2025-04-13 00:46:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:18.921612 | orchestrator | 2025-04-13 00:46:18 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:21.970369 | orchestrator | 2025-04-13 00:46:18 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:21.970509 | orchestrator | 2025-04-13 00:46:21 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:21.972568 | orchestrator | 2025-04-13 00:46:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:21.973451 | orchestrator | 2025-04-13 00:46:21 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:21.973572 | orchestrator | 2025-04-13 00:46:21 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:25.009561 | orchestrator | 2025-04-13 00:46:25 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:25.014470 | orchestrator | 2025-04-13 00:46:25 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:25.015176 | orchestrator | 2025-04-13 00:46:25 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:28.068593 | orchestrator | 2025-04-13 00:46:25 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:28.068788 | orchestrator | 2025-04-13 00:46:28 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:28.069837 | orchestrator | 2025-04-13 00:46:28 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:28.073228 | orchestrator | 2025-04-13 00:46:28 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:31.111894 | orchestrator | 2025-04-13 00:46:28 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:31.112047 | orchestrator | 2025-04-13 00:46:31 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:31.112137 | orchestrator | 2025-04-13 00:46:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:31.114981 | orchestrator | 2025-04-13 00:46:31 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:31.115061 | orchestrator | 2025-04-13 00:46:31 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:34.163283 | orchestrator | 2025-04-13 00:46:34 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:34.163956 | orchestrator | 2025-04-13 00:46:34 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:34.164558 | orchestrator | 2025-04-13 00:46:34 |
INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:37.224946 | orchestrator | 2025-04-13 00:46:34 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:37.225108 | orchestrator | 2025-04-13 00:46:37 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state STARTED
2025-04-13 00:46:37.225193 | orchestrator | 2025-04-13 00:46:37 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:37.226417 | orchestrator | 2025-04-13 00:46:37 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:40.295936 | orchestrator | 2025-04-13 00:46:37 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:40.296081 | orchestrator |
2025-04-13 00:46:40.296119 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-04-13 00:46:40.296148 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-04-13 00:46:40.296162 | orchestrator | Sunday 13 April 2025 00:44:45 +0000 (0:00:00.239)       0:00:00.239 **********
2025-04-13 00:46:40.296176 | orchestrator | ok: [testbed-manager]
2025-04-13 00:46:40.296206 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-04-13 00:46:40.296220 | orchestrator | Sunday 13 April 2025 00:44:46 +0000 (0:00:00.940)       0:00:01.179 **********
2025-04-13 00:46:40.296235 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-04-13 00:46:40.296290 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-04-13 00:46:40.296304 | orchestrator | Sunday 13 April 2025 00:44:46 +0000 (0:00:00.798)       0:00:01.978 **********
2025-04-13 00:46:40.296318 | orchestrator | changed: [testbed-manager]
2025-04-13 00:46:40.296351 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-04-13 00:46:40.296367 | orchestrator | Sunday 13 April 2025 00:44:48 +0000 (0:00:01.868)       0:00:03.847 **********
2025-04-13 00:46:40.296383 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-04-13 00:46:40.296398 | orchestrator | ok: [testbed-manager]
2025-04-13 00:46:40.296428 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-04-13 00:46:40.296443 | orchestrator | Sunday 13 April 2025 00:45:41 +0000 (0:00:53.057)       0:00:56.904 **********
2025-04-13 00:46:40.296458 | orchestrator | changed: [testbed-manager]
2025-04-13 00:46:40.296502 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:46:40.296518 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-04-13 00:46:40.296551 | orchestrator | Sunday 13 April 2025 00:45:45 +0000 (0:00:03.472)       0:01:00.376 **********
2025-04-13 00:46:40.296566 | orchestrator | ===============================================================================
2025-04-13 00:46:40.296580 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 53.06s
2025-04-13 00:46:40.296594 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.47s
2025-04-13 00:46:40.296608 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.87s
2025-04-13 00:46:40.296621 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.94s
2025-04-13 00:46:40.296635 | orchestrator |
osism.services.phpmyadmin : Create required directories ----------------- 0.80s
2025-04-13 00:46:40.296677 | orchestrator | PLAY [Apply role common] *******************************************************
2025-04-13 00:46:40.296725 | orchestrator | TASK [common : include_tasks] **************************************************
2025-04-13 00:46:40.296738 | orchestrator | Sunday 13 April 2025 00:44:25 +0000 (0:00:00.343)       0:00:00.343 **********
2025-04-13 00:46:40.296753 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:46:40.296781 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-04-13 00:46:40.296795 | orchestrator | Sunday 13 April 2025 00:44:27 +0000 (0:00:01.848)       0:00:02.191 **********
2025-04-13 00:46:40.296808 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
[... the items 'cron', 'fluentd' and 'kolla-toolbox' reported as changed on testbed-manager and testbed-node-0 through testbed-node-5 ...]
2025-04-13 00:46:40.297145 | orchestrator | TASK [common : include_tasks] **************************************************
2025-04-13 00:46:40.297159 | orchestrator | Sunday 13 April 2025 00:44:30 +0000 (0:00:03.746)       0:00:05.937 **********
2025-04-13 00:46:40.297172 |
orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:46:40.297207 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-04-13 00:46:40.297220 | orchestrator | Sunday 13 April 2025 00:44:32 +0000 (0:00:01.505)       0:00:07.443 **********
2025-04-13 00:46:40.297237 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
[... the same 'fluentd' item, plus the matching 'kolla-toolbox' item (image registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206, privileged) and 'cron' item (image registry.osism.tech/kolla/release/cron:3.0.20241206), reported as changed on testbed-manager and testbed-node-0 through testbed-node-5 ...]
2025-04-13 00:46:40.297600 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-04-13 00:46:40.297614 | orchestrator | Sunday 13 April 2025 00:44:37 +0000 (0:00:04.978)       0:00:12.422 **********
2025-04-13 00:46:40.297635 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
[... the matching 'kolla-toolbox' item likewise skipped on testbed-manager ...]
2025-04-13 00:46:40.297670 | orchestrator |
skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.297685 | orchestrator | skipping: [testbed-manager]
[... the same 'fluentd', 'kolla-toolbox' and 'cron' items skipped on testbed-node-0 through testbed-node-5; the task was skipped on all seven hosts ...]
2025-04-13 00:46:40.298189 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:46:40.298214 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-04-13 00:46:40.298226 | orchestrator | Sunday 13 April 2025 00:44:39 +0000 (0:00:01.915)       0:00:14.337 **********
2025-04-13 00:46:40.298239 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.298258 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298271 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298284 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:46:40.298297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.298315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.298364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298407 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:46:40.298420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.298433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/',
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298465 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:46:40.298478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.298494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298520 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:46:40.298532 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:46:40.298547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.298574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298606 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:46:40.298619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.298632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.298657 | orchestrator |
skipping: [testbed-node-5]
2025-04-13 00:46:40.298670 | orchestrator |
2025-04-13 00:46:40.298682 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-04-13 00:46:40.298712 | orchestrator | Sunday 13 April 2025 00:44:41 +0000 (0:00:02.633) 0:00:16.971 **********
2025-04-13 00:46:40.298725 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:46:40.298737 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:46:40.298749 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:46:40.298761 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:46:40.298773 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:46:40.298785 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:46:40.298798 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:46:40.298810 | orchestrator |
2025-04-13 00:46:40.298822 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-04-13 00:46:40.298834 | orchestrator | Sunday 13 April 2025 00:44:42 +0000 (0:00:00.890) 0:00:17.862 **********
2025-04-13 00:46:40.298847 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:46:40.298859 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:46:40.298871 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:46:40.298883 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:46:40.298895 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:46:40.298908 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:46:40.298920 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:46:40.298932 | orchestrator |
2025-04-13 00:46:40.298944 | orchestrator | TASK [common : Ensure fluentd image is present for label check] ****************
2025-04-13 00:46:40.298956 | orchestrator | Sunday 13 April 2025 00:44:43 +0000 (0:00:00.994) 0:00:18.856 **********
2025-04-13 00:46:40.298968 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:46:40.298980 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:46:40.298992 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:46:40.299010 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:46:40.299022 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:46:40.299034 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:46:40.299046 | orchestrator | changed: [testbed-manager]
2025-04-13 00:46:40.299058 | orchestrator |
2025-04-13 00:46:40.299071 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ******************************
2025-04-13 00:46:40.299083 | orchestrator | Sunday 13 April 2025 00:45:15 +0000 (0:00:31.547) 0:00:50.404 **********
2025-04-13 00:46:40.299095 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:46:40.299113 | orchestrator | ok: [testbed-manager]
2025-04-13 00:46:40.299126 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:46:40.299138 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:46:40.299150 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:46:40.299162 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:46:40.299179 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:46:40.299191 | orchestrator |
2025-04-13 00:46:40.299204 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-04-13 00:46:40.299216 | orchestrator | Sunday 13 April 2025 00:45:18 +0000 (0:00:02.944) 0:00:53.349 **********
2025-04-13 00:46:40.299228 | orchestrator | ok: [testbed-manager]
2025-04-13 00:46:40.299240 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:46:40.299252 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:46:40.299264 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:46:40.299277 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:46:40.299289 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:46:40.299301 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:46:40.299313 | orchestrator |
2025-04-13 00:46:40.299325 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ******************************
2025-04-13 00:46:40.299338 | orchestrator | Sunday 13 April 2025 00:45:19 +0000 (0:00:01.099) 0:00:54.449 **********
2025-04-13 00:46:40.299350 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:46:40.299362 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:46:40.299374 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:46:40.299386 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:46:40.299399 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:46:40.299411 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:46:40.299423 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:46:40.299435 | orchestrator |
2025-04-13 00:46:40.299447 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-04-13 00:46:40.299459 | orchestrator | Sunday 13 April 2025 00:45:20 +0000 (0:00:00.994) 0:00:55.443 **********
2025-04-13 00:46:40.299471 | orchestrator | skipping: [testbed-manager]
2025-04-13 00:46:40.299483 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:46:40.299495 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:46:40.299507 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:46:40.299519 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:46:40.299531 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:46:40.299543 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:46:40.299556 | orchestrator |
2025-04-13 00:46:40.299568 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-04-13 00:46:40.299580 | orchestrator | Sunday 13 April 2025 00:45:21 +0000 (0:00:00.992) 0:00:56.436 **********
2025-04-13 00:46:40.299593 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment':
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.299606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.299628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.299642 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.299675 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.299704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.299718 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-13 00:46:40.299767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes':
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299800 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299813 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299887 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299912 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40.299938 | orchestrator |
2025-04-13 00:46:40.299951 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-04-13 00:46:40.299963 | orchestrator | Sunday 13 April 2025 00:45:25 +0000 (0:00:04.459) 0:01:00.895 **********
2025-04-13
00:46:40.299976 | orchestrator | [WARNING]: Skipped 2025-04-13 00:46:40.299988 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-04-13 00:46:40.300000 | orchestrator | to this access issue: 2025-04-13 00:46:40.300013 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-04-13 00:46:40.300025 | orchestrator | directory 2025-04-13 00:46:40.300038 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-13 00:46:40.300050 | orchestrator | 2025-04-13 00:46:40.300063 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-04-13 00:46:40.300075 | orchestrator | Sunday 13 April 2025 00:45:26 +0000 (0:00:00.968) 0:01:01.864 ********** 2025-04-13 00:46:40.300087 | orchestrator | [WARNING]: Skipped 2025-04-13 00:46:40.300106 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-04-13 00:46:40.300126 | orchestrator | to this access issue: 2025-04-13 00:46:40.300148 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-04-13 00:46:40.300175 | orchestrator | directory 2025-04-13 00:46:40.300194 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-13 00:46:40.300213 | orchestrator | 2025-04-13 00:46:40.300233 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-04-13 00:46:40.300253 | orchestrator | Sunday 13 April 2025 00:45:27 +0000 (0:00:00.438) 0:01:02.303 ********** 2025-04-13 00:46:40.300272 | orchestrator | [WARNING]: Skipped 2025-04-13 00:46:40.300293 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-04-13 00:46:40.300314 | orchestrator | to this access issue: 2025-04-13 00:46:40.300336 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-04-13 00:46:40.300382 | orchestrator | 
directory 2025-04-13 00:46:40.300405 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-13 00:46:40.300425 | orchestrator | 2025-04-13 00:46:40.300438 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-04-13 00:46:40.300450 | orchestrator | Sunday 13 April 2025 00:45:27 +0000 (0:00:00.432) 0:01:02.735 ********** 2025-04-13 00:46:40.300462 | orchestrator | [WARNING]: Skipped 2025-04-13 00:46:40.300474 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-04-13 00:46:40.300486 | orchestrator | to this access issue: 2025-04-13 00:46:40.300499 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-04-13 00:46:40.300511 | orchestrator | directory 2025-04-13 00:46:40.300523 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-13 00:46:40.300535 | orchestrator | 2025-04-13 00:46:40.300548 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-04-13 00:46:40.300560 | orchestrator | Sunday 13 April 2025 00:45:28 +0000 (0:00:00.603) 0:01:03.339 ********** 2025-04-13 00:46:40.300572 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:46:40.300585 | orchestrator | changed: [testbed-manager] 2025-04-13 00:46:40.300597 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:46:40.300609 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:46:40.300621 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:46:40.300633 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:46:40.300646 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:46:40.300658 | orchestrator | 2025-04-13 00:46:40.300670 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-04-13 00:46:40.300682 | orchestrator | Sunday 13 April 2025 00:45:31 +0000 (0:00:03.508) 0:01:06.848 ********** 2025-04-13 00:46:40.300724 | orchestrator | changed: 
[testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-13 00:46:40.300739 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-13 00:46:40.300752 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-13 00:46:40.300764 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-13 00:46:40.300776 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-13 00:46:40.300789 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-13 00:46:40.300801 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-13 00:46:40.300813 | orchestrator | 2025-04-13 00:46:40.300825 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-04-13 00:46:40.300837 | orchestrator | Sunday 13 April 2025 00:45:34 +0000 (0:00:02.911) 0:01:09.759 ********** 2025-04-13 00:46:40.300850 | orchestrator | changed: [testbed-manager] 2025-04-13 00:46:40.300862 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:46:40.300874 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:46:40.300886 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:46:40.300908 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:46:40.300929 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:46:40.300941 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:46:40.300954 | orchestrator | 2025-04-13 00:46:40.300966 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-04-13 00:46:40.300978 | orchestrator | Sunday 13 April 2025 00:45:37 +0000 (0:00:03.223) 0:01:12.983 
********** 2025-04-13 00:46:40.301001 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.301015 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:46:40.301028 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.301041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:46:40.301054 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.301073 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.301086 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.301122 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.301136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:46:40.301153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:46:40.301166 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.301179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:46:40.301191 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.301204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:46:40.301236 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:46:40 | INFO  | Task b0761ba1-7a4d-4834-86f2-0ed7c8bd12cf is in state SUCCESS
2025-04-13 00:46:40.301289 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.301308 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.301350 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.301376 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.301406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:46:40.301434 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-04-13 00:46:40.301469 | orchestrator | 2025-04-13 00:46:40.301493 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-04-13 00:46:40.301516 | orchestrator | Sunday 13 April 2025 00:45:40 +0000 (0:00:02.728) 0:01:15.711 ********** 2025-04-13 00:46:40.301538 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-13 00:46:40.301561 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-13 00:46:40.301583 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-13 00:46:40.301604 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-13 00:46:40.301625 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-13 00:46:40.301648 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-13 00:46:40.301682 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-13 00:46:40.301755 | orchestrator | 2025-04-13 00:46:40.301778 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-04-13 00:46:40.301791 | orchestrator | Sunday 13 April 2025 00:45:43 +0000 (0:00:03.006) 0:01:18.718 ********** 2025-04-13 00:46:40.301804 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-13 00:46:40.301816 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-13 00:46:40.301829 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-13 00:46:40.301841 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-13 00:46:40.301853 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-13 00:46:40.301865 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-13 00:46:40.301877 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-13 00:46:40.301889 | orchestrator | 2025-04-13 00:46:40.301901 | orchestrator | TASK [common : Check common containers] **************************************** 2025-04-13 00:46:40.301913 | orchestrator | Sunday 13 April 2025 00:45:46 +0000 (0:00:02.457) 0:01:21.176 ********** 2025-04-13 00:46:40.301927 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.301940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.301953 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.301976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.301989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.302050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302073 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.302086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302100 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.302146 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302166 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-13 00:46:40.302204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302222 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302241 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302280 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:46:40.302292 | orchestrator | 2025-04-13 00:46:40.302304 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-04-13 00:46:40.302322 | orchestrator | Sunday 13 April 2025 00:45:50 +0000 (0:00:04.120) 0:01:25.297 ********** 2025-04-13 00:46:40.302335 | orchestrator | changed: [testbed-manager] 2025-04-13 00:46:40.302348 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:46:40.302360 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:46:40.302372 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:46:40.302385 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:46:40.302397 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:46:40.302409 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:46:40.302426 | orchestrator | 2025-04-13 00:46:40.302440 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-04-13 00:46:40.302461 | orchestrator | Sunday 13 April 2025 00:45:52 +0000 (0:00:02.069) 0:01:27.367 ********** 2025-04-13 00:46:40.302482 | orchestrator | changed: [testbed-manager] 2025-04-13 00:46:40.302503 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:46:40.302525 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:46:40.302548 | orchestrator | changed: [testbed-node-2] 
2025-04-13 00:46:40.302571 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:46:40.302593 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:46:40.302614 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:46:40.302634 | orchestrator |
2025-04-13 00:46:40.302646 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-13 00:46:40.302658 | orchestrator | Sunday 13 April 2025 00:45:54 +0000 (0:00:01.885) 0:01:29.252 **********
2025-04-13 00:46:40.302671 | orchestrator |
2025-04-13 00:46:40.302683 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-13 00:46:40.302718 | orchestrator | Sunday 13 April 2025 00:45:54 +0000 (0:00:00.072) 0:01:29.325 **********
2025-04-13 00:46:40.302731 | orchestrator |
2025-04-13 00:46:40.302791 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-13 00:46:40.302813 | orchestrator | Sunday 13 April 2025 00:45:54 +0000 (0:00:00.069) 0:01:29.395 **********
2025-04-13 00:46:40.302825 | orchestrator |
2025-04-13 00:46:40.302838 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-13 00:46:40.302850 | orchestrator | Sunday 13 April 2025 00:45:54 +0000 (0:00:00.056) 0:01:29.451 **********
2025-04-13 00:46:40.302862 | orchestrator |
2025-04-13 00:46:40.302875 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-13 00:46:40.302887 | orchestrator | Sunday 13 April 2025 00:45:54 +0000 (0:00:00.277) 0:01:29.729 **********
2025-04-13 00:46:40.302899 | orchestrator |
2025-04-13 00:46:40.302911 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-13 00:46:40.302923 | orchestrator | Sunday 13 April 2025 00:45:54 +0000 (0:00:00.057) 0:01:29.787 **********
2025-04-13 00:46:40.302936 | orchestrator |
2025-04-13 00:46:40.302948 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-13 00:46:40.302960 | orchestrator | Sunday 13 April 2025 00:45:54 +0000 (0:00:00.069) 0:01:29.856 **********
2025-04-13 00:46:40.302972 | orchestrator |
2025-04-13 00:46:40.302985 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-04-13 00:46:40.302997 | orchestrator | Sunday 13 April 2025 00:45:54 +0000 (0:00:00.093) 0:01:29.950 **********
2025-04-13 00:46:40.303009 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:46:40.303021 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:46:40.303033 | orchestrator | changed: [testbed-manager]
2025-04-13 00:46:40.303045 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:46:40.303057 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:46:40.303070 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:46:40.303082 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:46:40.303094 | orchestrator |
2025-04-13 00:46:40.303106 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-04-13 00:46:40.303118 | orchestrator | Sunday 13 April 2025 00:46:04 +0000 (0:00:09.081) 0:01:39.031 **********
2025-04-13 00:46:40.303130 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:46:40.303142 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:46:40.303154 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:46:40.303166 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:46:40.303178 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:46:40.303190 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:46:40.303202 | orchestrator | changed: [testbed-manager]
2025-04-13 00:46:40.303215 | orchestrator |
2025-04-13 00:46:40.303227 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-04-13 00:46:40.303239 | orchestrator | Sunday 13 April
2025 00:46:27 +0000 (0:00:23.081) 0:02:02.112 **********
2025-04-13 00:46:40.303252 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:46:40.303264 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:46:40.303276 | orchestrator | ok: [testbed-manager]
2025-04-13 00:46:40.303288 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:46:40.303300 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:46:40.303313 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:46:40.303325 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:46:40.303337 | orchestrator |
2025-04-13 00:46:40.303349 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-04-13 00:46:40.303362 | orchestrator | Sunday 13 April 2025 00:46:29 +0000 (0:00:02.710) 0:02:04.823 **********
2025-04-13 00:46:40.303374 | orchestrator | changed: [testbed-manager]
2025-04-13 00:46:40.303386 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:46:40.303399 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:46:40.303411 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:46:40.303423 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:46:40.303435 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:46:40.303447 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:46:40.303459 | orchestrator |
2025-04-13 00:46:40.303472 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:46:40.303490 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-13 00:46:40.303503 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-13 00:46:40.303524 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-13 00:46:43.338476 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-13 00:46:43.338628 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-13 00:46:43.338656 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-13 00:46:43.338742 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-13 00:46:43.338769 | orchestrator |
2025-04-13 00:46:43.338792 | orchestrator |
2025-04-13 00:46:43.338814 | orchestrator | TASKS RECAP ********************************************************************
2025-04-13 00:46:43.338838 | orchestrator | Sunday 13 April 2025 00:46:39 +0000 (0:00:09.668) 0:02:14.492 **********
2025-04-13 00:46:43.338860 | orchestrator | ===============================================================================
2025-04-13 00:46:43.338883 | orchestrator | common : Ensure fluentd image is present for label check --------------- 31.55s
2025-04-13 00:46:43.338905 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 23.08s
2025-04-13 00:46:43.338951 | orchestrator | common : Restart cron container ----------------------------------------- 9.67s
2025-04-13 00:46:43.338973 | orchestrator | common : Restart fluentd container -------------------------------------- 9.08s
2025-04-13 00:46:43.338997 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.98s
2025-04-13 00:46:43.339023 | orchestrator | common : Copying over config.json files for services -------------------- 4.46s
2025-04-13 00:46:43.339047 | orchestrator | common : Check common containers ---------------------------------------- 4.12s
2025-04-13 00:46:43.339073 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.75s
2025-04-13 00:46:43.339102 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 3.51s
2025-04-13 00:46:43.339132 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.22s
2025-04-13 00:46:43.339157 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.01s
2025-04-13 00:46:43.339183 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.94s
2025-04-13 00:46:43.339208 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.91s
2025-04-13 00:46:43.339235 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.73s
2025-04-13 00:46:43.339261 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.71s
2025-04-13 00:46:43.339288 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.63s
2025-04-13 00:46:43.339314 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.46s
2025-04-13 00:46:43.339337 | orchestrator | common : Creating log volume -------------------------------------------- 2.07s
2025-04-13 00:46:43.339363 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.92s
2025-04-13 00:46:43.339388 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.89s
2025-04-13 00:46:43.339414 | orchestrator | 2025-04-13 00:46:40 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:43.339438 | orchestrator | 2025-04-13 00:46:40 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:43.339496 | orchestrator | 2025-04-13 00:46:40 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:43.339544 | orchestrator | 2025-04-13 00:46:43 | INFO  | Task edd924c5-2b7e-47e0-bd56-aa1b1a1a9439 is in state STARTED
2025-04-13 00:46:43.340001 | orchestrator | 2025-04-13 00:46:43 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED
2025-04-13 00:46:43.340117 | orchestrator | 2025-04-13 00:46:43 | INFO  | Task 98209221-3ad8-456b-808c-f95d54430ade is in state STARTED
2025-04-13 00:46:43.340152 | orchestrator | 2025-04-13 00:46:43 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:43.340584 | orchestrator | 2025-04-13 00:46:43 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:43.341475 | orchestrator | 2025-04-13 00:46:43 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED
2025-04-13 00:46:46.374387 | orchestrator | 2025-04-13 00:46:43 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:46.374563 | orchestrator | 2025-04-13 00:46:46 | INFO  | Task edd924c5-2b7e-47e0-bd56-aa1b1a1a9439 is in state STARTED
2025-04-13 00:46:46.374880 | orchestrator | 2025-04-13 00:46:46 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED
2025-04-13 00:46:46.375213 | orchestrator | 2025-04-13 00:46:46 | INFO  | Task 98209221-3ad8-456b-808c-f95d54430ade is in state STARTED
2025-04-13 00:46:46.375888 | orchestrator | 2025-04-13 00:46:46 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:46.376392 | orchestrator | 2025-04-13 00:46:46 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:46.376971 | orchestrator | 2025-04-13 00:46:46 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED
2025-04-13 00:46:49.426082 | orchestrator | 2025-04-13 00:46:46 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:49.426272 | orchestrator | 2025-04-13 00:46:49 | INFO  | Task edd924c5-2b7e-47e0-bd56-aa1b1a1a9439 is in state STARTED
2025-04-13 00:46:49.428508 | orchestrator | 2025-04-13 00:46:49 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED
2025-04-13 00:46:49.440961 | orchestrator | 2025-04-13 00:46:49 | INFO  | Task 98209221-3ad8-456b-808c-f95d54430ade is in state STARTED
2025-04-13 00:46:49.441065 | orchestrator | 2025-04-13 00:46:49 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:49.441096 | orchestrator | 2025-04-13 00:46:49 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:52.485226 | orchestrator | 2025-04-13 00:46:49 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED
2025-04-13 00:46:52.485400 | orchestrator | 2025-04-13 00:46:49 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:52.485443 | orchestrator | 2025-04-13 00:46:52 | INFO  | Task edd924c5-2b7e-47e0-bd56-aa1b1a1a9439 is in state STARTED
2025-04-13 00:46:52.485526 | orchestrator | 2025-04-13 00:46:52 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED
2025-04-13 00:46:52.488851 | orchestrator | 2025-04-13 00:46:52 | INFO  | Task 98209221-3ad8-456b-808c-f95d54430ade is in state STARTED
2025-04-13 00:46:55.530321 | orchestrator | 2025-04-13 00:46:52 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:55.530449 | orchestrator | 2025-04-13 00:46:52 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:55.530469 | orchestrator | 2025-04-13 00:46:52 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED
2025-04-13 00:46:55.530514 | orchestrator | 2025-04-13 00:46:52 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:55.530547 | orchestrator | 2025-04-13 00:46:55 | INFO  | Task edd924c5-2b7e-47e0-bd56-aa1b1a1a9439 is in state STARTED
2025-04-13 00:46:55.533349 | orchestrator | 2025-04-13 00:46:55 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED
2025-04-13 00:46:55.536275 | orchestrator | 2025-04-13 00:46:55 | INFO  | Task 98209221-3ad8-456b-808c-f95d54430ade is in state STARTED
2025-04-13 00:46:55.540130 | orchestrator | 2025-04-13 00:46:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:55.541639 | orchestrator | 2025-04-13 00:46:55 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:55.544239 | orchestrator | 2025-04-13 00:46:55 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED
2025-04-13 00:46:58.584048 | orchestrator | 2025-04-13 00:46:55 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:46:58.584241 | orchestrator | 2025-04-13 00:46:58 | INFO  | Task edd924c5-2b7e-47e0-bd56-aa1b1a1a9439 is in state STARTED
2025-04-13 00:46:58.584331 | orchestrator | 2025-04-13 00:46:58 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED
2025-04-13 00:46:58.585250 | orchestrator | 2025-04-13 00:46:58 | INFO  | Task 98209221-3ad8-456b-808c-f95d54430ade is in state STARTED
2025-04-13 00:46:58.585878 | orchestrator | 2025-04-13 00:46:58 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:46:58.586859 | orchestrator | 2025-04-13 00:46:58 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:46:58.587816 | orchestrator | 2025-04-13 00:46:58 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED
2025-04-13 00:46:58.587985 | orchestrator | 2025-04-13 00:46:58 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:47:01.634503 | orchestrator | 2025-04-13 00:47:01 | INFO  | Task edd924c5-2b7e-47e0-bd56-aa1b1a1a9439 is in state STARTED
2025-04-13 00:47:01.635657 | orchestrator | 2025-04-13 00:47:01 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED
2025-04-13 00:47:01.638166 | orchestrator | 2025-04-13 00:47:01 | INFO  | Task 98209221-3ad8-456b-808c-f95d54430ade is in state STARTED
2025-04-13 00:47:01.638233 | orchestrator | 2025-04-13 00:47:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:47:01.639030 | orchestrator | 2025-04-13 00:47:01 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
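The repeated status lines above show the deployment watcher polling each task ID once per round, waiting one second between rounds ("Wait 1 second(s) until the next check"), and dropping a task from the rotation once it reports SUCCESS. A minimal Python sketch of that poll-and-wait pattern follows; the function name and data shapes here are illustrative assumptions, not the actual osism client API (the real watcher queries Celery task state):

```python
import time

def wait_for_tasks(task_states, interval=0.0):
    """Poll every pending task once per round until all report SUCCESS.

    `task_states` maps a task id to the sequence of states returned by
    successive status checks, mimicking the repeated
    'Task <uuid> is in state STARTED' lines in the log above.
    Returns the number of polling rounds performed.
    """
    pending = {tid: iter(states) for tid, states in task_states.items()}
    rounds = 0
    while pending:
        rounds += 1
        for tid in list(pending):
            # An exhausted sequence is treated as a finished task.
            state = next(pending[tid], "SUCCESS")
            if state == "SUCCESS":
                del pending[tid]  # stop polling tasks that completed
        if pending:
            time.sleep(interval)  # "Wait 1 second(s) until the next check"
    return rounds

# One task finishes on the 2nd check, the other on the 3rd.
rounds = wait_for_tasks({
    "edd924c5": ["STARTED", "SUCCESS"],
    "9e270c5c": ["STARTED", "STARTED", "SUCCESS"],
})
```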
2025-04-13 00:47:01.639790 | orchestrator | 2025-04-13 00:47:01 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED
2025-04-13 00:47:01.639937 | orchestrator | 2025-04-13 00:47:01 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:47:04.685313 | orchestrator | 2025-04-13 00:47:04 | INFO  | Task edd924c5-2b7e-47e0-bd56-aa1b1a1a9439 is in state SUCCESS
2025-04-13 00:47:04.688240 | orchestrator | 2025-04-13 00:47:04 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED
2025-04-13 00:47:04.690535 | orchestrator | 2025-04-13 00:47:04 | INFO  | Task 98209221-3ad8-456b-808c-f95d54430ade is in state STARTED
2025-04-13 00:47:04.693919 | orchestrator | 2025-04-13 00:47:04 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED
2025-04-13 00:47:04.695909 | orchestrator | 2025-04-13 00:47:04 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:47:04.697974 | orchestrator | 2025-04-13 00:47:04 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:47:04.700337 | orchestrator | 2025-04-13 00:47:04 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED
2025-04-13 00:47:04.700457 | orchestrator | 2025-04-13 00:47:04 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:47:07.740941 | orchestrator | 2025-04-13 00:47:07 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED
2025-04-13 00:47:07.741724 | orchestrator | 2025-04-13 00:47:07 | INFO  | Task 98209221-3ad8-456b-808c-f95d54430ade is in state SUCCESS
2025-04-13 00:47:07.741754 | orchestrator |
2025-04-13 00:47:07.741762 | orchestrator |
2025-04-13 00:47:07.741769 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-13 00:47:07.741778 | orchestrator |
2025-04-13 00:47:07.741785 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-13 00:47:07.741792 | orchestrator | Sunday 13 April 2025 00:46:44 +0000 (0:00:00.475) 0:00:00.475 **********
2025-04-13 00:47:07.741798 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:47:07.741807 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:47:07.741813 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:47:07.741820 | orchestrator |
2025-04-13 00:47:07.741827 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-13 00:47:07.741834 | orchestrator | Sunday 13 April 2025 00:46:44 +0000 (0:00:00.496) 0:00:00.972 **********
2025-04-13 00:47:07.741841 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-04-13 00:47:07.741848 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-04-13 00:47:07.741854 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-04-13 00:47:07.741861 | orchestrator |
2025-04-13 00:47:07.741868 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-04-13 00:47:07.741875 | orchestrator |
2025-04-13 00:47:07.741882 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-04-13 00:47:07.741888 | orchestrator | Sunday 13 April 2025 00:46:45 +0000 (0:00:00.304) 0:00:01.276 **********
2025-04-13 00:47:07.741894 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:47:07.741901 | orchestrator |
2025-04-13 00:47:07.741907 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-04-13 00:47:07.741913 | orchestrator | Sunday 13 April 2025 00:46:45 +0000 (0:00:00.681) 0:00:01.958 **********
2025-04-13 00:47:07.741919 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-04-13 00:47:07.741925 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-04-13 00:47:07.741931 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-04-13 00:47:07.741936 | orchestrator |
2025-04-13 00:47:07.741942 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-04-13 00:47:07.741948 | orchestrator | Sunday 13 April 2025 00:46:46 +0000 (0:00:00.902) 0:00:02.861 **********
2025-04-13 00:47:07.741954 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-04-13 00:47:07.741960 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-04-13 00:47:07.741966 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-04-13 00:47:07.741971 | orchestrator |
2025-04-13 00:47:07.741977 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-04-13 00:47:07.741983 | orchestrator | Sunday 13 April 2025 00:46:49 +0000 (0:00:02.596) 0:00:05.458 **********
2025-04-13 00:47:07.741989 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:47:07.742008 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:47:07.742073 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:47:07.742082 | orchestrator |
2025-04-13 00:47:07.742092 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-04-13 00:47:07.742098 | orchestrator | Sunday 13 April 2025 00:46:52 +0000 (0:00:03.430) 0:00:08.888 **********
2025-04-13 00:47:07.742104 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:47:07.742125 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:47:07.742131 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:47:07.742137 | orchestrator |
2025-04-13 00:47:07.742143 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:47:07.742148 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:47:07.742156 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:47:07.742162 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:47:07.742168 | orchestrator |
2025-04-13 00:47:07.742174 | orchestrator |
2025-04-13 00:47:07.742180 | orchestrator | TASKS RECAP ********************************************************************
2025-04-13 00:47:07.742186 | orchestrator | Sunday 13 April 2025 00:47:01 +0000 (0:00:08.740) 0:00:17.629 **********
2025-04-13 00:47:07.742192 | orchestrator | ===============================================================================
2025-04-13 00:47:07.742198 | orchestrator | memcached : Restart memcached container --------------------------------- 8.74s
2025-04-13 00:47:07.742203 | orchestrator | memcached : Check memcached container ----------------------------------- 3.43s
2025-04-13 00:47:07.742209 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.60s
2025-04-13 00:47:07.742215 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.90s
2025-04-13 00:47:07.742221 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.68s
2025-04-13 00:47:07.742227 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s
2025-04-13 00:47:07.742233 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.30s
2025-04-13 00:47:07.742239 | orchestrator |
2025-04-13 00:47:07.742245 | orchestrator |
2025-04-13 00:47:07.742250 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-13 00:47:07.742256 | orchestrator |
2025-04-13 00:47:07.742262 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-13 00:47:07.742268 | orchestrator | Sunday 13 April 2025 00:46:44 +0000 (0:00:00.221) 0:00:00.221 **********
2025-04-13 00:47:07.742273 | orchestrator | ok:
[testbed-node-0] 2025-04-13 00:47:07.742279 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:47:07.742285 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:47:07.742291 | orchestrator | 2025-04-13 00:47:07.742297 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 00:47:07.742309 | orchestrator | Sunday 13 April 2025 00:46:45 +0000 (0:00:00.309) 0:00:00.531 ********** 2025-04-13 00:47:07.742316 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-04-13 00:47:07.742322 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-04-13 00:47:07.742329 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-04-13 00:47:07.742338 | orchestrator | 2025-04-13 00:47:07.742348 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-04-13 00:47:07.742358 | orchestrator | 2025-04-13 00:47:07.742368 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-04-13 00:47:07.742377 | orchestrator | Sunday 13 April 2025 00:46:45 +0000 (0:00:00.360) 0:00:00.892 ********** 2025-04-13 00:47:07.742387 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:47:07.742396 | orchestrator | 2025-04-13 00:47:07.742405 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-04-13 00:47:07.742414 | orchestrator | Sunday 13 April 2025 00:46:46 +0000 (0:00:00.706) 0:00:01.598 ********** 2025-04-13 00:47:07.742425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742506 | orchestrator | 2025-04-13 00:47:07.742515 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-04-13 00:47:07.742524 | orchestrator | Sunday 13 April 2025 00:46:48 +0000 (0:00:01.766) 0:00:03.365 ********** 2025-04-13 00:47:07.742532 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-13 
00:47:07.742612 | orchestrator | 2025-04-13 00:47:07.742637 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-04-13 00:47:07.742647 | orchestrator | Sunday 13 April 2025 00:46:51 +0000 (0:00:03.322) 0:00:06.688 ********** 2025-04-13 00:47:07.742665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742729 | orchestrator | 2025-04-13 00:47:07.742747 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-04-13 00:47:07.742758 | orchestrator | Sunday 13 April 2025 00:46:55 +0000 (0:00:03.843) 0:00:10.531 ********** 2025-04-13 00:47:07.742768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742822 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-13 00:47:07.742901 | orchestrator | 2025-04-13 00:47:07.742910 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-13 00:47:07.742916 | orchestrator | Sunday 13 April 2025 00:46:57 +0000 (0:00:02.323) 0:00:12.854 ********** 2025-04-13 00:47:07.742922 | orchestrator | 2025-04-13 00:47:07.742928 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-13 00:47:07.742934 | orchestrator | Sunday 13 April 2025 00:46:57 +0000 (0:00:00.152) 0:00:13.007 ********** 2025-04-13 00:47:07.742940 | orchestrator | 2025-04-13 00:47:07.742946 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-13 00:47:07.742951 | orchestrator | Sunday 13 April 2025 00:46:57 +0000 (0:00:00.134) 0:00:13.141 ********** 2025-04-13 00:47:07.742957 | orchestrator | 2025-04-13 00:47:07.742963 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-04-13 00:47:07.742969 | orchestrator | Sunday 13 April 2025 00:46:58 +0000 (0:00:00.441) 0:00:13.582 ********** 2025-04-13 00:47:07.742975 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:47:07.742981 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:47:07.742987 | orchestrator | changed: 
[testbed-node-2] 2025-04-13 00:47:07.742993 | orchestrator | 2025-04-13 00:47:07.742999 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-04-13 00:47:07.743005 | orchestrator | Sunday 13 April 2025 00:47:01 +0000 (0:00:03.331) 0:00:16.914 ********** 2025-04-13 00:47:07.743011 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:47:07.743021 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:47:07.743027 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:47:07.743033 | orchestrator | 2025-04-13 00:47:07.743039 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:47:07.743045 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 00:47:07.743051 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 00:47:07.743057 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 00:47:07.743063 | orchestrator | 2025-04-13 00:47:07.743069 | orchestrator | 2025-04-13 00:47:07.743075 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 00:47:07.743081 | orchestrator | Sunday 13 April 2025 00:47:06 +0000 (0:00:04.769) 0:00:21.683 ********** 2025-04-13 00:47:07.743087 | orchestrator | =============================================================================== 2025-04-13 00:47:07.743092 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.77s 2025-04-13 00:47:07.743098 | orchestrator | redis : Copying over redis config files --------------------------------- 3.84s 2025-04-13 00:47:07.743104 | orchestrator | redis : Restart redis container ----------------------------------------- 3.33s 2025-04-13 00:47:07.743110 | orchestrator | redis : Copying over default config.json files 
-------------------------- 3.32s 2025-04-13 00:47:07.743115 | orchestrator | redis : Check redis containers ------------------------------------------ 2.32s 2025-04-13 00:47:07.743121 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.77s 2025-04-13 00:47:07.743127 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.73s 2025-04-13 00:47:07.743133 | orchestrator | redis : include_tasks --------------------------------------------------- 0.71s 2025-04-13 00:47:07.743138 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2025-04-13 00:47:07.743144 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-04-13 00:47:07.743153 | orchestrator | 2025-04-13 00:47:07 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:07.745532 | orchestrator | 2025-04-13 00:47:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:07.747123 | orchestrator | 2025-04-13 00:47:07 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:07.747906 | orchestrator | 2025-04-13 00:47:07 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:10.788406 | orchestrator | 2025-04-13 00:47:07 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:10.788518 | orchestrator | 2025-04-13 00:47:10 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:10.789007 | orchestrator | 2025-04-13 00:47:10 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:10.789026 | orchestrator | 2025-04-13 00:47:10 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:10.789039 | orchestrator | 2025-04-13 00:47:10 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 
00:47:10.789697 | orchestrator | 2025-04-13 00:47:10 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:13.829698 | orchestrator | 2025-04-13 00:47:10 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:13.829805 | orchestrator | 2025-04-13 00:47:13 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:16.880726 | orchestrator | 2025-04-13 00:47:13 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:16.880916 | orchestrator | 2025-04-13 00:47:13 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:16.880944 | orchestrator | 2025-04-13 00:47:13 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:16.880959 | orchestrator | 2025-04-13 00:47:13 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:16.880979 | orchestrator | 2025-04-13 00:47:13 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:16.881013 | orchestrator | 2025-04-13 00:47:16 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:16.881097 | orchestrator | 2025-04-13 00:47:16 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:16.881895 | orchestrator | 2025-04-13 00:47:16 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:16.883254 | orchestrator | 2025-04-13 00:47:16 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:16.885213 | orchestrator | 2025-04-13 00:47:16 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:19.929132 | orchestrator | 2025-04-13 00:47:16 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:19.929270 | orchestrator | 2025-04-13 00:47:19 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:19.929901 | orchestrator 
| 2025-04-13 00:47:19 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:19.929944 | orchestrator | 2025-04-13 00:47:19 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:19.930540 | orchestrator | 2025-04-13 00:47:19 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:19.931278 | orchestrator | 2025-04-13 00:47:19 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:22.969469 | orchestrator | 2025-04-13 00:47:19 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:22.969697 | orchestrator | 2025-04-13 00:47:22 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:22.973748 | orchestrator | 2025-04-13 00:47:22 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:22.975697 | orchestrator | 2025-04-13 00:47:22 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:22.983173 | orchestrator | 2025-04-13 00:47:22 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:22.984874 | orchestrator | 2025-04-13 00:47:22 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:26.021675 | orchestrator | 2025-04-13 00:47:22 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:26.021807 | orchestrator | 2025-04-13 00:47:26 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:26.027317 | orchestrator | 2025-04-13 00:47:26 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:26.028077 | orchestrator | 2025-04-13 00:47:26 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:26.031644 | orchestrator | 2025-04-13 00:47:26 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:26.033211 | orchestrator | 
2025-04-13 00:47:26 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:29.068074 | orchestrator | 2025-04-13 00:47:26 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:29.068218 | orchestrator | 2025-04-13 00:47:29 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:29.068850 | orchestrator | 2025-04-13 00:47:29 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:29.068893 | orchestrator | 2025-04-13 00:47:29 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:29.072947 | orchestrator | 2025-04-13 00:47:29 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:29.073552 | orchestrator | 2025-04-13 00:47:29 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:29.073691 | orchestrator | 2025-04-13 00:47:29 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:32.112400 | orchestrator | 2025-04-13 00:47:32 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:32.112719 | orchestrator | 2025-04-13 00:47:32 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:32.112769 | orchestrator | 2025-04-13 00:47:32 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:32.113449 | orchestrator | 2025-04-13 00:47:32 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:32.114200 | orchestrator | 2025-04-13 00:47:32 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:35.144951 | orchestrator | 2025-04-13 00:47:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:35.145106 | orchestrator | 2025-04-13 00:47:35 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:35.146848 | orchestrator | 2025-04-13 00:47:35 | INFO  | 
Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:35.148642 | orchestrator | 2025-04-13 00:47:35 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:35.151535 | orchestrator | 2025-04-13 00:47:35 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:35.155840 | orchestrator | 2025-04-13 00:47:35 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:35.156171 | orchestrator | 2025-04-13 00:47:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:38.187186 | orchestrator | 2025-04-13 00:47:38 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:38.187660 | orchestrator | 2025-04-13 00:47:38 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:38.189129 | orchestrator | 2025-04-13 00:47:38 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:38.191033 | orchestrator | 2025-04-13 00:47:38 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:38.191135 | orchestrator | 2025-04-13 00:47:38 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:41.240737 | orchestrator | 2025-04-13 00:47:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:41.240870 | orchestrator | 2025-04-13 00:47:41 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:41.242685 | orchestrator | 2025-04-13 00:47:41 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:41.244020 | orchestrator | 2025-04-13 00:47:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:41.244918 | orchestrator | 2025-04-13 00:47:41 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:41.245455 | orchestrator | 2025-04-13 00:47:41 | INFO  | Task 
1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:41.246986 | orchestrator | 2025-04-13 00:47:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:44.286982 | orchestrator | 2025-04-13 00:47:44 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:44.292798 | orchestrator | 2025-04-13 00:47:44 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:44.297572 | orchestrator | 2025-04-13 00:47:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:44.302339 | orchestrator | 2025-04-13 00:47:44 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:44.305055 | orchestrator | 2025-04-13 00:47:44 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:44.305313 | orchestrator | 2025-04-13 00:47:44 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:47.348451 | orchestrator | 2025-04-13 00:47:47 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:47.348724 | orchestrator | 2025-04-13 00:47:47 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:47.352212 | orchestrator | 2025-04-13 00:47:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:47.352896 | orchestrator | 2025-04-13 00:47:47 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:47.353521 | orchestrator | 2025-04-13 00:47:47 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:50.381887 | orchestrator | 2025-04-13 00:47:47 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:50.382137 | orchestrator | 2025-04-13 00:47:50 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:50.382237 | orchestrator | 2025-04-13 00:47:50 | INFO  | Task 
9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:50.383961 | orchestrator | 2025-04-13 00:47:50 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:50.384693 | orchestrator | 2025-04-13 00:47:50 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:50.385544 | orchestrator | 2025-04-13 00:47:50 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:53.429505 | orchestrator | 2025-04-13 00:47:50 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:53.429671 | orchestrator | 2025-04-13 00:47:53 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:53.430802 | orchestrator | 2025-04-13 00:47:53 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:53.435194 | orchestrator | 2025-04-13 00:47:53 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:53.435624 | orchestrator | 2025-04-13 00:47:53 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:56.469905 | orchestrator | 2025-04-13 00:47:53 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state STARTED 2025-04-13 00:47:56.470105 | orchestrator | 2025-04-13 00:47:53 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:56.470156 | orchestrator | 2025-04-13 00:47:56 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:56.470662 | orchestrator | 2025-04-13 00:47:56 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:56.473680 | orchestrator | 2025-04-13 00:47:56 | INFO  | Task 8d6d5cf3-c102-4bc1-b22f-25d7036dacc0 is in state STARTED 2025-04-13 00:47:56.475785 | orchestrator | 2025-04-13 00:47:56 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:56.477592 | orchestrator | 2025-04-13 00:47:56 | INFO  | Task 
5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:47:56.481913 | orchestrator | 2025-04-13 00:47:56.482065 | orchestrator | 2025-04-13 00:47:56.482083 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 00:47:56.482095 | orchestrator | 2025-04-13 00:47:56.482106 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 00:47:56.482138 | orchestrator | Sunday 13 April 2025 00:46:44 +0000 (0:00:00.449) 0:00:00.449 ********** 2025-04-13 00:47:56.482149 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:47:56.482160 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:47:56.482170 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:47:56.482179 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:47:56.482188 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:47:56.482197 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:47:56.482206 | orchestrator | 2025-04-13 00:47:56.482215 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 00:47:56.482225 | orchestrator | Sunday 13 April 2025 00:46:45 +0000 (0:00:00.716) 0:00:01.166 ********** 2025-04-13 00:47:56.482234 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-13 00:47:56.482244 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-13 00:47:56.482256 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-13 00:47:56.482265 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-13 00:47:56.482275 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-13 00:47:56.482298 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-13 00:47:56.482309 | orchestrator 
| 2025-04-13 00:47:56.482319 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-04-13 00:47:56.482329 | orchestrator | 2025-04-13 00:47:56.482339 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-04-13 00:47:56.482349 | orchestrator | Sunday 13 April 2025 00:46:46 +0000 (0:00:00.793) 0:00:01.959 ********** 2025-04-13 00:47:56.482378 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:47:56.482390 | orchestrator | 2025-04-13 00:47:56.482400 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-13 00:47:56.482410 | orchestrator | Sunday 13 April 2025 00:46:48 +0000 (0:00:02.284) 0:00:04.245 ********** 2025-04-13 00:47:56.482420 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-04-13 00:47:56.482431 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-04-13 00:47:56.482442 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-04-13 00:47:56.482453 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-04-13 00:47:56.482464 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-04-13 00:47:56.482475 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-04-13 00:47:56.482486 | orchestrator | 2025-04-13 00:47:56.482497 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-13 00:47:56.482562 | orchestrator | Sunday 13 April 2025 00:46:50 +0000 (0:00:02.404) 0:00:06.650 ********** 2025-04-13 00:47:56.482573 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-04-13 00:47:56.482588 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-04-13 00:47:56.482599 | orchestrator | changed: [testbed-node-1] => 
(item=openvswitch) 2025-04-13 00:47:56.482608 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-04-13 00:47:56.482617 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-04-13 00:47:56.482626 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-04-13 00:47:56.482635 | orchestrator | 2025-04-13 00:47:56.482645 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-13 00:47:56.482654 | orchestrator | Sunday 13 April 2025 00:46:53 +0000 (0:00:02.791) 0:00:09.441 ********** 2025-04-13 00:47:56.482663 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-04-13 00:47:56.482672 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:47:56.482683 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-04-13 00:47:56.482691 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:47:56.482701 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-04-13 00:47:56.482710 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:47:56.482726 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-04-13 00:47:56.482736 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:47:56.482745 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-04-13 00:47:56.482754 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:47:56.482763 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-04-13 00:47:56.482773 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:47:56.482782 | orchestrator | 2025-04-13 00:47:56.482791 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-04-13 00:47:56.482800 | orchestrator | Sunday 13 April 2025 00:46:55 +0000 (0:00:01.625) 0:00:11.067 ********** 2025-04-13 00:47:56.482809 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:47:56.482818 | orchestrator | skipping: [testbed-node-4] 
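
The `module-load` role above first loads the `openvswitch` kernel module, then persists it so it survives reboots (the subsequent "Drop module persistence" task is skipped because persistence is wanted). The persistence step conventionally writes a one-line conf file under `/etc/modules-load.d/`, which `systemd-modules-load` reads at boot. A minimal sketch of that idea — the function name and `conf_dir` parameter are illustrative, not the role's actual implementation:

```python
from pathlib import Path


def persist_modules(modules, conf_dir="/etc/modules-load.d"):
    """Write one <name>.conf per kernel module so systemd-modules-load
    picks it up at boot. Sketch of the 'Persist modules via
    modules-load.d' step; names here are assumptions, not the role's code."""
    conf_paths = []
    for name in modules:
        path = Path(conf_dir) / f"{name}.conf"
        # Each conf file simply lists the module name, one per line.
        path.write_text(name + "\n")
        conf_paths.append(path)
    return conf_paths
```

Run against a scratch directory (instead of the real `/etc/modules-load.d`) this produces e.g. `openvswitch.conf` containing the single line `openvswitch`.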
2025-04-13 00:47:56.482827 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:47:56.482836 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:47:56.482846 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:47:56.482855 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:47:56.482864 | orchestrator |
2025-04-13 00:47:56.482873 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-04-13 00:47:56.482882 | orchestrator | Sunday 13 April 2025 00:46:56 +0000 (0:00:01.106) 0:00:12.174 **********
2025-04-13 00:47:56.482905 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.482925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.482935 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.482946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.482955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.482969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.482984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.482995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483014 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483052 | orchestrator |
2025-04-13 00:47:56.483062 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-04-13 00:47:56.483071 | orchestrator | Sunday 13 April 2025 00:46:58 +0000 (0:00:02.181) 0:00:14.356 **********
2025-04-13 00:47:56.483080 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.483090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.483100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.483110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.483119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.483162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.483183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483193 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483247 | orchestrator |
2025-04-13 00:47:56.483257 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ******
2025-04-13 00:47:56.483266 | orchestrator | Sunday 13 April 2025 00:47:01 +0000 (0:00:03.177) 0:00:17.213 **********
2025-04-13 00:47:56.483275 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:47:56.483284 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:47:56.483293 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:47:56.483303 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:47:56.483312 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:47:56.483321 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:47:56.483331 | orchestrator |
2025-04-13 00:47:56.483340 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] ***
2025-04-13 00:47:56.483349 | orchestrator | Sunday 13 April 2025 00:47:04 +0000 (0:00:03.040) 0:00:20.390 **********
2025-04-13 00:47:56.483358 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:47:56.483367 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:47:56.483377 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:47:56.483386 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:47:56.483395 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:47:56.483404 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:47:56.483413 | orchestrator |
2025-04-13 00:47:56.483422 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-04-13 00:47:56.483431 | orchestrator | Sunday 13 April 2025 00:47:07 +0000 (0:00:01.214) 0:00:23.431 **********
2025-04-13 00:47:56.483441 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:47:56.483450 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:47:56.483459 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:47:56.483468 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:47:56.483477 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:47:56.483486 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:47:56.483495 | orchestrator |
2025-04-13 00:47:56.483529 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-04-13 00:47:56.483540 | orchestrator | Sunday 13 April 2025 00:47:08 +0000 (0:00:01.214) 0:00:24.645 **********
2025-04-13 00:47:56.483549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.483564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.483578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.483596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.483607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.483616 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483635 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483645 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-13 00:47:56.483805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-13 00:47:56.483831 | orchestrator |
2025-04-13 00:47:56.483841 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-13 00:47:56.483850 | orchestrator | Sunday 13 April 2025 00:47:11 +0000 (0:00:02.925) 0:00:27.571 **********
2025-04-13 00:47:56.483860 | orchestrator |
2025-04-13 00:47:56.483870 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-13 00:47:56.483879 | orchestrator | Sunday 13 April 2025 00:47:12 +0000 (0:00:00.161) 0:00:27.733 **********
2025-04-13 00:47:56.483889 | orchestrator |
2025-04-13 00:47:56.483898 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-13 00:47:56.483907 | orchestrator | Sunday 13 April 2025 00:47:12 +0000 (0:00:00.304) 0:00:28.037 **********
2025-04-13 00:47:56.483917 | orchestrator |
2025-04-13 00:47:56.483926 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-13 00:47:56.483935 | orchestrator | Sunday 13 April 2025 00:47:12 +0000 (0:00:00.137) 0:00:28.175 **********
2025-04-13 00:47:56.483944 | orchestrator |
2025-04-13 00:47:56.483958 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-13 00:47:56.483967 | orchestrator | Sunday 13 April 2025 00:47:12 +0000 (0:00:00.305) 0:00:28.480 **********
2025-04-13 00:47:56.483976 | orchestrator |
2025-04-13 00:47:56.483986 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-13 00:47:56.483995 | orchestrator | Sunday 13 April 2025 00:47:12 +0000 (0:00:00.137) 0:00:28.618 **********
2025-04-13 00:47:56.484005 | orchestrator |
2025-04-13 00:47:56.484014 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-04-13 00:47:56.484023 | orchestrator | Sunday 13 April 2025 00:47:13 +0000 (0:00:00.340) 0:00:28.958 **********
2025-04-13 00:47:56.484033 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:47:56.484042 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:47:56.484051 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:47:56.484061 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:47:56.484070 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:47:56.484079 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:47:56.484088 | orchestrator |
2025-04-13 00:47:56.484097 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-04-13 00:47:56.484107 | orchestrator | Sunday 13 April 2025 00:47:18 +0000 (0:00:05.701) 0:00:34.660 **********
2025-04-13 00:47:56.484121 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:47:56.484131 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:47:56.484141 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:47:56.484150 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:47:56.484159 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:47:56.484168 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:47:56.484178 | orchestrator |
2025-04-13 00:47:56.484187 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-04-13 00:47:56.484197 | orchestrator | Sunday 13 April 2025 00:47:20 +0000 (0:00:01.696) 0:00:36.356 **********
2025-04-13 00:47:56.484206 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:47:56.484216 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:47:56.484232 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:47:56.484242 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:47:56.484252 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:47:56.484261 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:47:56.484270 | orchestrator |
2025-04-13 00:47:56.484280 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-04-13 00:47:56.484289 | orchestrator | Sunday 13 April 2025 00:47:30 +0000 (0:00:09.600) 0:00:45.956 **********
2025-04-13 00:47:56.484303 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-04-13 00:47:56.484313 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-04-13 00:47:56.484323 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-04-13 00:47:56.484336 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-04-13 00:47:56.484346 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-04-13 00:47:56.484355 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-04-13 00:47:56.484364 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-04-13 00:47:56.484374 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-04-13 00:47:56.484383 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-04-13 00:47:56.484392 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-04-13 00:47:56.484401 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-04-13 00:47:56.484412 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-04-13 00:47:56.484422 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-13 00:47:56.484433 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-13 00:47:56.484443 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-13 00:47:56.484453 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-13 00:47:56.484463 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-13 00:47:56.484474 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-13 00:47:56.484484 | orchestrator |
2025-04-13 00:47:56.484494 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-04-13 00:47:56.484546 | orchestrator | Sunday 13 April 2025 00:47:38 +0000 (0:00:08.077) 0:00:54.034 **********
2025-04-13 00:47:56.484558 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-04-13 00:47:56.484568 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:47:56.484580 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-04-13 00:47:56.484590 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:47:56.484600 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-04-13 00:47:56.484611 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:47:56.484621 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-04-13 00:47:56.484632 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-04-13 00:47:56.484643 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-04-13 00:47:56.484653 | orchestrator |
2025-04-13 00:47:56.484663 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-04-13 00:47:56.484674 | orchestrator | Sunday 13 April 2025 00:47:41 +0000 (0:00:03.176) 0:00:57.210 **********
2025-04-13 00:47:56.484685 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-04-13 00:47:56.484694 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:47:56.484711 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-04-13 00:47:56.484720 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:47:56.484730 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-04-13 00:47:56.484739 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:47:56.484748 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-04-13 00:47:56.484763 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-04-13 00:47:59.539283 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-04-13 00:47:59.539393 | orchestrator |
2025-04-13 00:47:59.539409 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-04-13 00:47:59.539421 | orchestrator | Sunday 13 April 2025 00:47:45 +0000 (0:00:04.227) 0:01:01.437 **********
2025-04-13 00:47:59.539433 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:47:59.539445 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:47:59.539456 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:47:59.539571 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:47:59.539589 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:47:59.539601 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:47:59.539612 | orchestrator | 2025-04-13 00:47:59.539623 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:47:59.539635 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-13 00:47:59.539649 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-13 00:47:59.539660 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-13 00:47:59.539671 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-13 00:47:59.539682 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-13 00:47:59.539712 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-13 00:47:59.539723 | orchestrator | 2025-04-13 00:47:59.539734 | orchestrator | 2025-04-13 00:47:59.539745 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 00:47:59.539763 | orchestrator | Sunday 13 April 2025 00:47:54 +0000 (0:00:08.640) 0:01:10.078 ********** 2025-04-13 00:47:59.539782 | orchestrator | =============================================================================== 2025-04-13 00:47:59.539799 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.24s 2025-04-13 00:47:59.539811 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.07s 2025-04-13 00:47:59.539824 | orchestrator | 
openvswitch : Restart openvswitch-db-server container ------------------- 5.70s 2025-04-13 00:47:59.539836 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.23s 2025-04-13 00:47:59.539849 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 3.18s 2025-04-13 00:47:59.539861 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.18s 2025-04-13 00:47:59.539873 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 3.04s 2025-04-13 00:47:59.539887 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.93s 2025-04-13 00:47:59.539899 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.86s 2025-04-13 00:47:59.539911 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.79s 2025-04-13 00:47:59.539929 | orchestrator | module-load : Load modules ---------------------------------------------- 2.40s 2025-04-13 00:47:59.539962 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.29s 2025-04-13 00:47:59.539974 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.18s 2025-04-13 00:47:59.539986 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.70s 2025-04-13 00:47:59.540010 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.63s 2025-04-13 00:47:59.540035 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.39s 2025-04-13 00:47:59.540048 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.21s 2025-04-13 00:47:59.540060 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.11s 2025-04-13 00:47:59.540073 | orchestrator | Group 
hosts based on enabled services ----------------------------------- 0.79s 2025-04-13 00:47:59.540085 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s 2025-04-13 00:47:59.540105 | orchestrator | 2025-04-13 00:47:56 | INFO  | Task 1be7c678-7542-4f49-9da0-61fcf11a5b2f is in state SUCCESS 2025-04-13 00:47:59.540127 | orchestrator | 2025-04-13 00:47:56 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:47:59.540157 | orchestrator | 2025-04-13 00:47:59 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:47:59.540255 | orchestrator | 2025-04-13 00:47:59 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state STARTED 2025-04-13 00:47:59.541312 | orchestrator | 2025-04-13 00:47:59 | INFO  | Task 8d6d5cf3-c102-4bc1-b22f-25d7036dacc0 is in state STARTED 2025-04-13 00:47:59.542124 | orchestrator | 2025-04-13 00:47:59 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:47:59.542834 | orchestrator | 2025-04-13 00:47:59 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:48:02.574344 | orchestrator | 2025-04-13 00:47:59 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles from 00:48:02 to 00:49:24 omitted: the same five tasks remain in state STARTED, re-checked every ~3 seconds ...]
2025-04-13 00:49:27.962204 | orchestrator | 2025-04-13 00:49:27 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:49:27.963606 | orchestrator | 2025-04-13 00:49:27 | INFO  | Task 9310aa6b-4ba2-4296-9628-ea89c326af03 is in state SUCCESS 2025-04-13 00:49:27.964701 | orchestrator | 2025-04-13 00:49:27.964737 | orchestrator | 2025-04-13
00:49:27.964751 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-04-13 00:49:27.964766 | orchestrator | 2025-04-13 00:49:27.964780 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-04-13 00:49:27.964794 | orchestrator | Sunday 13 April 2025 00:47:09 +0000 (0:00:00.178) 0:00:00.178 ********** 2025-04-13 00:49:27.964808 | orchestrator | ok: [localhost] => { 2025-04-13 00:49:27.964825 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-04-13 00:49:27.964839 | orchestrator | } 2025-04-13 00:49:27.964853 | orchestrator | 2025-04-13 00:49:27.964867 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-04-13 00:49:27.964881 | orchestrator | Sunday 13 April 2025 00:47:09 +0000 (0:00:00.049) 0:00:00.228 ********** 2025-04-13 00:49:27.964896 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-04-13 00:49:27.964911 | orchestrator | ...ignoring 2025-04-13 00:49:27.964925 | orchestrator | 2025-04-13 00:49:27.964939 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-04-13 00:49:27.964953 | orchestrator | Sunday 13 April 2025 00:47:11 +0000 (0:00:02.674) 0:00:02.903 ********** 2025-04-13 00:49:27.964966 | orchestrator | skipping: [localhost] 2025-04-13 00:49:27.964980 | orchestrator | 2025-04-13 00:49:27.964994 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-04-13 00:49:27.965008 | orchestrator | Sunday 13 April 2025 00:47:11 +0000 (0:00:00.065) 0:00:02.969 ********** 2025-04-13 00:49:27.965021 | orchestrator | ok: [localhost] 2025-04-13 00:49:27.965036 | orchestrator | 2025-04-13 00:49:27.965075 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 00:49:27.965089 | orchestrator | 2025-04-13 00:49:27.965103 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 00:49:27.965117 | orchestrator | Sunday 13 April 2025 00:47:12 +0000 (0:00:00.209) 0:00:03.179 ********** 2025-04-13 00:49:27.965130 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:49:27.965144 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:49:27.965158 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:49:27.965172 | orchestrator | 2025-04-13 00:49:27.965186 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 00:49:27.965199 | orchestrator | Sunday 13 April 2025 00:47:12 +0000 (0:00:00.420) 0:00:03.600 ********** 2025-04-13 00:49:27.965213 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-04-13 00:49:27.965227 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2025-04-13 00:49:27.965241 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-04-13 00:49:27.965255 | orchestrator | 2025-04-13 00:49:27.965269 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-04-13 00:49:27.965282 | orchestrator | 2025-04-13 00:49:27.965296 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-13 00:49:27.965310 | orchestrator | Sunday 13 April 2025 00:47:12 +0000 (0:00:00.411) 0:00:04.011 ********** 2025-04-13 00:49:27.965326 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:49:27.965343 | orchestrator | 2025-04-13 00:49:27.965381 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-13 00:49:27.965398 | orchestrator | Sunday 13 April 2025 00:47:15 +0000 (0:00:02.226) 0:00:06.237 ********** 2025-04-13 00:49:27.965414 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:49:27.965429 | orchestrator | 2025-04-13 00:49:27.965445 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-04-13 00:49:27.965460 | orchestrator | Sunday 13 April 2025 00:47:16 +0000 (0:00:01.364) 0:00:07.601 ********** 2025-04-13 00:49:27.965475 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:49:27.965492 | orchestrator | 2025-04-13 00:49:27.965507 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-04-13 00:49:27.965538 | orchestrator | Sunday 13 April 2025 00:47:16 +0000 (0:00:00.407) 0:00:08.009 ********** 2025-04-13 00:49:27.965554 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:49:27.965570 | orchestrator | 2025-04-13 00:49:27.965585 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-04-13 00:49:27.965600 | 
orchestrator | Sunday 13 April 2025 00:47:17 +0000 (0:00:00.724) 0:00:08.733 ********** 2025-04-13 00:49:27.965615 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:49:27.965630 | orchestrator | 2025-04-13 00:49:27.965646 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-04-13 00:49:27.965661 | orchestrator | Sunday 13 April 2025 00:47:17 +0000 (0:00:00.392) 0:00:09.126 ********** 2025-04-13 00:49:27.965676 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:49:27.965690 | orchestrator | 2025-04-13 00:49:27.965704 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-13 00:49:27.965718 | orchestrator | Sunday 13 April 2025 00:47:18 +0000 (0:00:00.402) 0:00:09.529 ********** 2025-04-13 00:49:27.965732 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:49:27.965746 | orchestrator | 2025-04-13 00:49:27.965760 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-13 00:49:27.965774 | orchestrator | Sunday 13 April 2025 00:47:19 +0000 (0:00:01.203) 0:00:10.733 ********** 2025-04-13 00:49:27.965787 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:49:27.965801 | orchestrator | 2025-04-13 00:49:27.965815 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-04-13 00:49:27.965829 | orchestrator | Sunday 13 April 2025 00:47:20 +0000 (0:00:01.124) 0:00:11.857 ********** 2025-04-13 00:49:27.965851 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:49:27.965866 | orchestrator | 2025-04-13 00:49:27.965880 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-04-13 00:49:27.965894 | orchestrator | Sunday 13 April 2025 00:47:21 +0000 (0:00:00.824) 0:00:12.682 ********** 2025-04-13 00:49:27.965908 | orchestrator | 
skipping: [testbed-node-0] 2025-04-13 00:49:27.965922 | orchestrator | 2025-04-13 00:49:27.965943 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-04-13 00:49:27.965958 | orchestrator | Sunday 13 April 2025 00:47:23 +0000 (0:00:01.504) 0:00:14.187 ********** 2025-04-13 00:49:27.966066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-13 00:49:27.966100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-13 00:49:27.966116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-13 00:49:27.966130 | orchestrator | 2025-04-13 00:49:27.966144 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-04-13 00:49:27.966168 | orchestrator | Sunday 13 April 2025 00:47:24 +0000 (0:00:01.186) 0:00:15.374 ********** 2025-04-13 00:49:27.966196 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-13 00:49:27.966225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-13 00:49:27.966240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-13 00:49:27.966255 | orchestrator | 2025-04-13 00:49:27.966269 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-04-13 00:49:27.966283 | orchestrator | Sunday 13 April 2025 00:47:25 +0000 (0:00:01.510) 0:00:16.884 ********** 2025-04-13 00:49:27.966297 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-13 00:49:27.966311 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-13 00:49:27.966325 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-13 00:49:27.966345 | 
orchestrator | 2025-04-13 00:49:27.966392 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-04-13 00:49:27.966408 | orchestrator | Sunday 13 April 2025 00:47:27 +0000 (0:00:01.883) 0:00:18.767 ********** 2025-04-13 00:49:27.966422 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-13 00:49:27.966436 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-13 00:49:27.966450 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-13 00:49:27.966464 | orchestrator | 2025-04-13 00:49:27.966478 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-04-13 00:49:27.966492 | orchestrator | Sunday 13 April 2025 00:47:30 +0000 (0:00:02.398) 0:00:21.166 ********** 2025-04-13 00:49:27.966505 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-13 00:49:27.966519 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-13 00:49:27.966533 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-13 00:49:27.966546 | orchestrator | 2025-04-13 00:49:27.966568 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-04-13 00:49:27.966582 | orchestrator | Sunday 13 April 2025 00:47:32 +0000 (0:00:02.459) 0:00:23.626 ********** 2025-04-13 00:49:27.966596 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-13 00:49:27.966610 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-13 00:49:27.966624 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-13 00:49:27.966638 | orchestrator | 2025-04-13 00:49:27.966652 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-04-13 00:49:27.966665 | orchestrator | Sunday 13 April 2025 00:47:34 +0000 (0:00:01.755) 0:00:25.382 ********** 2025-04-13 00:49:27.966679 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-13 00:49:27.966693 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-13 00:49:27.966707 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-13 00:49:27.966721 | orchestrator | 2025-04-13 00:49:27.966735 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-04-13 00:49:27.966754 | orchestrator | Sunday 13 April 2025 00:47:35 +0000 (0:00:01.518) 0:00:26.900 ********** 2025-04-13 00:49:27.966839 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-13 00:49:27.966860 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-13 00:49:27.966874 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-13 00:49:27.966888 | orchestrator | 2025-04-13 00:49:27.966902 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-13 00:49:27.966916 | orchestrator | Sunday 13 April 2025 00:47:37 +0000 (0:00:01.733) 0:00:28.634 ********** 2025-04-13 00:49:27.966929 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:49:27.966943 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:49:27.966957 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:49:27.966971 | orchestrator | 2025-04-13 
00:49:27.966985 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-04-13 00:49:27.966999 | orchestrator | Sunday 13 April 2025 00:47:38 +0000 (0:00:00.645) 0:00:29.280 ********** 2025-04-13 00:49:27.967014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-13 00:49:27.967038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-13 00:49:27.967063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-13 00:49:27.967078 | orchestrator | 2025-04-13 00:49:27.967093 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-04-13 00:49:27.967135 | orchestrator | Sunday 13 April 2025 00:47:39 +0000 (0:00:01.614) 0:00:30.895 ********** 2025-04-13 00:49:27.967150 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:49:27.967164 | orchestrator | changed: [testbed-node-1] 
2025-04-13 00:49:27.967178 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:49:27.967191 | orchestrator | 2025-04-13 00:49:27.967205 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-04-13 00:49:27.967219 | orchestrator | Sunday 13 April 2025 00:47:40 +0000 (0:00:00.892) 0:00:31.788 ********** 2025-04-13 00:49:27.967233 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:49:27.967247 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:49:27.967271 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:49:27.967285 | orchestrator | 2025-04-13 00:49:27.967299 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-04-13 00:49:27.967320 | orchestrator | Sunday 13 April 2025 00:47:47 +0000 (0:00:06.901) 0:00:38.689 ********** 2025-04-13 00:49:27.967334 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:49:27.967348 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:49:27.967420 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:49:27.967435 | orchestrator | 2025-04-13 00:49:27.967449 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-13 00:49:27.967462 | orchestrator | 2025-04-13 00:49:27.967476 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-13 00:49:27.967490 | orchestrator | Sunday 13 April 2025 00:47:48 +0000 (0:00:00.742) 0:00:39.432 ********** 2025-04-13 00:49:27.967504 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:49:27.967518 | orchestrator | 2025-04-13 00:49:27.967531 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-13 00:49:27.967545 | orchestrator | Sunday 13 April 2025 00:47:49 +0000 (0:00:00.844) 0:00:40.277 ********** 2025-04-13 00:49:27.967559 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:49:27.967572 | orchestrator | 2025-04-13 
00:49:27.967586 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-13 00:49:27.967600 | orchestrator | Sunday 13 April 2025 00:47:49 +0000 (0:00:00.304) 0:00:40.581 ********** 2025-04-13 00:49:27.967614 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:49:27.967628 | orchestrator | 2025-04-13 00:49:27.967640 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-13 00:49:27.967652 | orchestrator | Sunday 13 April 2025 00:47:56 +0000 (0:00:06.979) 0:00:47.561 ********** 2025-04-13 00:49:27.967664 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:49:27.967676 | orchestrator | 2025-04-13 00:49:27.967688 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-13 00:49:27.967700 | orchestrator | 2025-04-13 00:49:27.967712 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-13 00:49:27.967724 | orchestrator | Sunday 13 April 2025 00:48:46 +0000 (0:00:49.700) 0:01:37.261 ********** 2025-04-13 00:49:27.967737 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:49:27.967749 | orchestrator | 2025-04-13 00:49:27.967761 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-13 00:49:27.967773 | orchestrator | Sunday 13 April 2025 00:48:46 +0000 (0:00:00.568) 0:01:37.829 ********** 2025-04-13 00:49:27.967785 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:49:27.967797 | orchestrator | 2025-04-13 00:49:27.967809 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-13 00:49:27.967821 | orchestrator | Sunday 13 April 2025 00:48:46 +0000 (0:00:00.216) 0:01:38.045 ********** 2025-04-13 00:49:27.967833 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:49:27.967845 | orchestrator | 2025-04-13 00:49:27.967857 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2025-04-13 00:49:27.967870 | orchestrator | Sunday 13 April 2025 00:48:49 +0000 (0:00:02.133) 0:01:40.179 ********** 2025-04-13 00:49:27.967882 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:49:27.967894 | orchestrator | 2025-04-13 00:49:27.967906 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-13 00:49:27.967919 | orchestrator | 2025-04-13 00:49:27.967931 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-13 00:49:27.967943 | orchestrator | Sunday 13 April 2025 00:49:03 +0000 (0:00:14.684) 0:01:54.863 ********** 2025-04-13 00:49:27.967955 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:49:27.967973 | orchestrator | 2025-04-13 00:49:27.967990 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-13 00:49:27.968002 | orchestrator | Sunday 13 April 2025 00:49:04 +0000 (0:00:00.705) 0:01:55.569 ********** 2025-04-13 00:49:27.968014 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:49:27.968026 | orchestrator | 2025-04-13 00:49:27.968039 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-13 00:49:27.968058 | orchestrator | Sunday 13 April 2025 00:49:04 +0000 (0:00:00.397) 0:01:55.967 ********** 2025-04-13 00:49:27.968078 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:49:27.968090 | orchestrator | 2025-04-13 00:49:27.968102 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-13 00:49:27.968115 | orchestrator | Sunday 13 April 2025 00:49:07 +0000 (0:00:02.688) 0:01:58.655 ********** 2025-04-13 00:49:27.968127 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:49:27.968139 | orchestrator | 2025-04-13 00:49:27.968151 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2025-04-13 00:49:27.968163 | orchestrator | 2025-04-13 00:49:27.968175 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-04-13 00:49:27.968187 | orchestrator | Sunday 13 April 2025 00:49:22 +0000 (0:00:14.848) 0:02:13.504 ********** 2025-04-13 00:49:27.968199 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:49:27.968212 | orchestrator | 2025-04-13 00:49:27.968224 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-04-13 00:49:27.968236 | orchestrator | Sunday 13 April 2025 00:49:23 +0000 (0:00:00.683) 0:02:14.188 ********** 2025-04-13 00:49:27.968248 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-13 00:49:27.968260 | orchestrator | enable_outward_rabbitmq_True 2025-04-13 00:49:27.968273 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-13 00:49:27.968285 | orchestrator | outward_rabbitmq_restart 2025-04-13 00:49:27.968297 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:49:27.968309 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:49:27.968322 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:49:27.968334 | orchestrator | 2025-04-13 00:49:27.968347 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-04-13 00:49:27.968378 | orchestrator | skipping: no hosts matched 2025-04-13 00:49:27.968390 | orchestrator | 2025-04-13 00:49:27.968403 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-04-13 00:49:27.968415 | orchestrator | skipping: no hosts matched 2025-04-13 00:49:27.968427 | orchestrator | 2025-04-13 00:49:27.968439 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-04-13 00:49:27.968451 | orchestrator | skipping: no hosts matched 
2025-04-13 00:49:27.968463 | orchestrator | 2025-04-13 00:49:27.968475 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:49:27.968489 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-13 00:49:27.968502 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-13 00:49:27.968514 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:49:27.968527 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 00:49:27.968539 | orchestrator | 2025-04-13 00:49:27.968551 | orchestrator | 2025-04-13 00:49:27.968564 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 00:49:27.968576 | orchestrator | Sunday 13 April 2025 00:49:25 +0000 (0:00:02.344) 0:02:16.532 ********** 2025-04-13 00:49:27.968588 | orchestrator | =============================================================================== 2025-04-13 00:49:27.968601 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.23s 2025-04-13 00:49:27.968613 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.80s 2025-04-13 00:49:27.968625 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.90s 2025-04-13 00:49:27.968637 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.67s 2025-04-13 00:49:27.968650 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.46s 2025-04-13 00:49:27.968668 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.40s 2025-04-13 00:49:27.968680 | orchestrator | rabbitmq : Enable all stable feature flags 
------------------------------ 2.34s 2025-04-13 00:49:27.968693 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.23s 2025-04-13 00:49:27.968705 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.12s 2025-04-13 00:49:27.968717 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.88s 2025-04-13 00:49:27.968729 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.76s 2025-04-13 00:49:27.968741 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.73s 2025-04-13 00:49:27.968753 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.61s 2025-04-13 00:49:27.968770 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.52s 2025-04-13 00:49:27.968782 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.51s 2025-04-13 00:49:27.968795 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.50s 2025-04-13 00:49:27.968807 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.36s 2025-04-13 00:49:27.968819 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.20s 2025-04-13 00:49:27.968831 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.19s 2025-04-13 00:49:27.968843 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.12s 2025-04-13 00:49:27.968861 | orchestrator | 2025-04-13 00:49:27 | INFO  | Task 8d6d5cf3-c102-4bc1-b22f-25d7036dacc0 is in state STARTED 2025-04-13 00:49:27.968981 | orchestrator | 2025-04-13 00:49:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:49:27.969002 | orchestrator | 2025-04-13 00:49:27 | INFO  | Task 
5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:49:31.025797 | orchestrator | 2025-04-13 00:49:27 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:49:31.025943 | orchestrator | 2025-04-13 00:49:31 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:49:31.026952 | orchestrator | 2025-04-13 00:49:31 | INFO  | Task 8d6d5cf3-c102-4bc1-b22f-25d7036dacc0 is in state STARTED 2025-04-13 00:49:31.036504 | orchestrator | 2025-04-13 00:49:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:49:31.046395 | orchestrator | 2025-04-13 00:49:31 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:49:34.104164 | orchestrator | 2025-04-13 00:49:31 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:49:34.104430 | orchestrator | 2025-04-13 00:49:34 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:49:34.106242 | orchestrator | 2025-04-13 00:49:34 | INFO  | Task 8d6d5cf3-c102-4bc1-b22f-25d7036dacc0 is in state STARTED 2025-04-13 00:49:34.108238 | orchestrator | 2025-04-13 00:49:34 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:49:34.110741 | orchestrator | 2025-04-13 00:49:34 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:49:34.110863 | orchestrator | 2025-04-13 00:49:34 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:49:37.156985 | orchestrator | 2025-04-13 00:49:37 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:49:37.161582 | orchestrator | 2025-04-13 00:49:37 | INFO  | Task 8d6d5cf3-c102-4bc1-b22f-25d7036dacc0 is in state STARTED 2025-04-13 00:49:37.162760 | orchestrator | 2025-04-13 00:49:37 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:49:37.164203 | orchestrator | 2025-04-13 00:49:37 | INFO  | Task 
5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:50:16.876668 | orchestrator | 2025-04-13 00:50:13 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:50:16.876803 | orchestrator | 2025-04-13 00:50:16 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:50:16.877253 | orchestrator | 2025-04-13 00:50:16 | INFO  | Task 8d6d5cf3-c102-4bc1-b22f-25d7036dacc0 is in state STARTED 2025-04-13 00:50:16.877337 | orchestrator | 2025-04-13 00:50:16 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:50:16.878112 | orchestrator | 2025-04-13 00:50:16 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:50:19.923346 | orchestrator | 2025-04-13 00:50:16 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:50:19.923493 | orchestrator | 2025-04-13 00:50:19 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:50:19.925073 | orchestrator | 2025-04-13 00:50:19 | INFO  | Task 8d6d5cf3-c102-4bc1-b22f-25d7036dacc0 is in state STARTED 2025-04-13 00:50:19.926779 | orchestrator | 2025-04-13 00:50:19 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:50:19.928259 | orchestrator | 2025-04-13 00:50:19 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:50:19.928655 | orchestrator | 2025-04-13 00:50:19 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:50:22.977395 | orchestrator | 2025-04-13 00:50:22 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:50:22.979245 | orchestrator | 2025-04-13 00:50:22 | INFO  | Task 8d6d5cf3-c102-4bc1-b22f-25d7036dacc0 is in state SUCCESS 2025-04-13 00:50:22.981351 | orchestrator | 2025-04-13 00:50:22.981557 | orchestrator | 2025-04-13 00:50:22.981644 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 00:50:22.981667 | 
orchestrator | 2025-04-13 00:50:22.981692 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 00:50:22.981717 | orchestrator | Sunday 13 April 2025 00:47:59 +0000 (0:00:00.261) 0:00:00.261 ********** 2025-04-13 00:50:22.981740 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:50:22.981797 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:50:22.981816 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:50:22.981838 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.981861 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.981882 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.981904 | orchestrator | 2025-04-13 00:50:22.981925 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 00:50:22.981947 | orchestrator | Sunday 13 April 2025 00:47:59 +0000 (0:00:00.686) 0:00:00.948 ********** 2025-04-13 00:50:22.981968 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-04-13 00:50:22.981991 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-04-13 00:50:22.982068 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-04-13 00:50:22.982093 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-04-13 00:50:22.982114 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-04-13 00:50:22.982134 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-04-13 00:50:22.982156 | orchestrator | 2025-04-13 00:50:22.982202 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-04-13 00:50:22.982216 | orchestrator | 2025-04-13 00:50:22.982228 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-04-13 00:50:22.982241 | orchestrator | Sunday 13 April 2025 00:48:01 +0000 (0:00:01.884) 0:00:02.832 ********** 2025-04-13 00:50:22.982255 | orchestrator | included: 
/ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:50:22.982311 | orchestrator | 2025-04-13 00:50:22.982330 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-04-13 00:50:22.982343 | orchestrator | Sunday 13 April 2025 00:48:03 +0000 (0:00:01.870) 0:00:04.703 ********** 2025-04-13 00:50:22.982357 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982385 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982469 | orchestrator | 2025-04-13 00:50:22.982481 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-04-13 00:50:22.982494 | orchestrator | Sunday 13 April 2025 00:48:04 +0000 (0:00:01.346) 0:00:06.049 ********** 2025-04-13 00:50:22.982522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982557 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-04-13 00:50:22.982595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982608 | orchestrator | 2025-04-13 00:50:22.982620 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-04-13 00:50:22.982633 | orchestrator | Sunday 13 April 2025 00:48:07 +0000 (0:00:02.336) 0:00:08.385 ********** 2025-04-13 00:50:22.982645 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982658 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982740 | orchestrator | 2025-04-13 00:50:22.982753 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-04-13 00:50:22.982765 | orchestrator | Sunday 13 April 2025 00:48:08 +0000 (0:00:01.471) 0:00:09.856 ********** 2025-04-13 00:50:22.982777 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982871 | orchestrator | 2025-04-13 00:50:22.982883 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-04-13 00:50:22.982896 | orchestrator | Sunday 13 April 2025 00:48:10 +0000 (0:00:02.225) 0:00:12.082 ********** 2025-04-13 00:50:22.982908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-04-13 00:50:22.982933 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.982983 | orchestrator | 2025-04-13 00:50:22.982995 
| orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-04-13 00:50:22.983008 | orchestrator | Sunday 13 April 2025 00:48:12 +0000 (0:00:02.119) 0:00:14.201 ********** 2025-04-13 00:50:22.983020 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:50:22.983034 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:50:22.983052 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:50:22.983065 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:50:22.983077 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:50:22.983089 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:50:22.983101 | orchestrator | 2025-04-13 00:50:22.983114 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-04-13 00:50:22.983126 | orchestrator | Sunday 13 April 2025 00:48:15 +0000 (0:00:02.984) 0:00:17.185 ********** 2025-04-13 00:50:22.983138 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-04-13 00:50:22.983151 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-04-13 00:50:22.983164 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-04-13 00:50:22.983181 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-04-13 00:50:22.983193 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-04-13 00:50:22.983206 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-04-13 00:50:22.983218 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-13 00:50:22.983230 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-13 00:50:22.983242 | 
orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-13 00:50:22.983259 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-13 00:50:22.983271 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-13 00:50:22.983284 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-13 00:50:22.983323 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-13 00:50:22.983338 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-13 00:50:22.983351 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-13 00:50:22.983363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-13 00:50:22.983388 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-13 00:50:22.983411 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-13 00:50:22.983424 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-13 00:50:22.983437 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-13 00:50:22.983454 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-13 
00:50:22.983467 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-13 00:50:22.983479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-13 00:50:22.983491 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-13 00:50:22.983503 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-13 00:50:22.983515 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-13 00:50:22.983534 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-13 00:50:22.983546 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-13 00:50:22.983559 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-13 00:50:22.983571 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-13 00:50:22.983584 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-13 00:50:22.983596 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-13 00:50:22.983608 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-13 00:50:22.983621 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-13 00:50:22.983633 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-13 00:50:22.983645 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-13 
00:50:22.983658 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-13 00:50:22.983670 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-13 00:50:22.983722 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-13 00:50:22.983735 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-13 00:50:22.983754 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-04-13 00:50:22.983768 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-13 00:50:22.983780 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-13 00:50:22.983792 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-04-13 00:50:22.983805 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-04-13 00:50:22.983817 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-04-13 00:50:22.983829 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-13 00:50:22.983842 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-04-13 00:50:22.983854 | orchestrator | ok: [testbed-node-0] => (item={'name': 
'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-04-13 00:50:22.983866 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-13 00:50:22.983879 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-13 00:50:22.983891 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-13 00:50:22.983903 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-13 00:50:22.983916 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-13 00:50:22.983935 | orchestrator | 2025-04-13 00:50:22.983948 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-13 00:50:22.983961 | orchestrator | Sunday 13 April 2025 00:48:34 +0000 (0:00:18.776) 0:00:35.962 ********** 2025-04-13 00:50:22.983973 | orchestrator | 2025-04-13 00:50:22.983985 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-13 00:50:22.983998 | orchestrator | Sunday 13 April 2025 00:48:34 +0000 (0:00:00.054) 0:00:36.017 ********** 2025-04-13 00:50:22.984010 | orchestrator | 2025-04-13 00:50:22.984022 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-13 00:50:22.984035 | orchestrator | Sunday 13 April 2025 00:48:35 +0000 (0:00:00.206) 0:00:36.224 ********** 2025-04-13 00:50:22.984047 | orchestrator | 2025-04-13 00:50:22.984059 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-13 00:50:22.984072 | orchestrator | Sunday 13 April 
2025 00:48:35 +0000 (0:00:00.058) 0:00:36.282 ********** 2025-04-13 00:50:22.984084 | orchestrator | 2025-04-13 00:50:22.984096 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-13 00:50:22.984108 | orchestrator | Sunday 13 April 2025 00:48:35 +0000 (0:00:00.067) 0:00:36.349 ********** 2025-04-13 00:50:22.984121 | orchestrator | 2025-04-13 00:50:22.984133 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-13 00:50:22.984145 | orchestrator | Sunday 13 April 2025 00:48:35 +0000 (0:00:00.053) 0:00:36.403 ********** 2025-04-13 00:50:22.984157 | orchestrator | 2025-04-13 00:50:22.984170 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-04-13 00:50:22.984182 | orchestrator | Sunday 13 April 2025 00:48:35 +0000 (0:00:00.250) 0:00:36.653 ********** 2025-04-13 00:50:22.984194 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:50:22.984207 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:50:22.984219 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:50:22.984232 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.984244 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.984256 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.984268 | orchestrator | 2025-04-13 00:50:22.984281 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-04-13 00:50:22.984344 | orchestrator | Sunday 13 April 2025 00:48:37 +0000 (0:00:02.392) 0:00:39.046 ********** 2025-04-13 00:50:22.984359 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:50:22.984371 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:50:22.984384 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:50:22.984397 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:50:22.984409 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:50:22.984422 | orchestrator | changed: 
[testbed-node-2] 2025-04-13 00:50:22.984434 | orchestrator | 2025-04-13 00:50:22.984446 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-04-13 00:50:22.984459 | orchestrator | 2025-04-13 00:50:22.984471 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-13 00:50:22.984483 | orchestrator | Sunday 13 April 2025 00:48:57 +0000 (0:00:19.516) 0:00:58.562 ********** 2025-04-13 00:50:22.984496 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:50:22.984509 | orchestrator | 2025-04-13 00:50:22.984521 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-13 00:50:22.984533 | orchestrator | Sunday 13 April 2025 00:48:58 +0000 (0:00:00.677) 0:00:59.239 ********** 2025-04-13 00:50:22.984546 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:50:22.984558 | orchestrator | 2025-04-13 00:50:22.984576 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-04-13 00:50:22.984594 | orchestrator | Sunday 13 April 2025 00:48:58 +0000 (0:00:00.872) 0:01:00.112 ********** 2025-04-13 00:50:22.984607 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.984620 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.984657 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.984671 | orchestrator | 2025-04-13 00:50:22.984683 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-04-13 00:50:22.984695 | orchestrator | Sunday 13 April 2025 00:48:59 +0000 (0:00:00.917) 0:01:01.030 ********** 2025-04-13 00:50:22.984708 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.984720 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.984732 | orchestrator | ok: 
[testbed-node-2] 2025-04-13 00:50:22.984744 | orchestrator | 2025-04-13 00:50:22.984757 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-04-13 00:50:22.984769 | orchestrator | Sunday 13 April 2025 00:49:00 +0000 (0:00:00.311) 0:01:01.341 ********** 2025-04-13 00:50:22.984781 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.984794 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.984806 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.984818 | orchestrator | 2025-04-13 00:50:22.984830 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-04-13 00:50:22.984843 | orchestrator | Sunday 13 April 2025 00:49:00 +0000 (0:00:00.505) 0:01:01.847 ********** 2025-04-13 00:50:22.984855 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.984865 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.984875 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.984885 | orchestrator | 2025-04-13 00:50:22.984895 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-04-13 00:50:22.984905 | orchestrator | Sunday 13 April 2025 00:49:01 +0000 (0:00:00.482) 0:01:02.330 ********** 2025-04-13 00:50:22.984915 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.984925 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.984935 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.984945 | orchestrator | 2025-04-13 00:50:22.984955 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-04-13 00:50:22.984965 | orchestrator | Sunday 13 April 2025 00:49:01 +0000 (0:00:00.486) 0:01:02.816 ********** 2025-04-13 00:50:22.984975 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.984994 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985005 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985015 | 
orchestrator | 2025-04-13 00:50:22.985026 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-04-13 00:50:22.985036 | orchestrator | Sunday 13 April 2025 00:49:01 +0000 (0:00:00.280) 0:01:03.096 ********** 2025-04-13 00:50:22.985046 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985056 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985065 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985076 | orchestrator | 2025-04-13 00:50:22.985086 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-04-13 00:50:22.985096 | orchestrator | Sunday 13 April 2025 00:49:02 +0000 (0:00:00.462) 0:01:03.559 ********** 2025-04-13 00:50:22.985106 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985116 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985126 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985136 | orchestrator | 2025-04-13 00:50:22.985146 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-04-13 00:50:22.985156 | orchestrator | Sunday 13 April 2025 00:49:02 +0000 (0:00:00.476) 0:01:04.035 ********** 2025-04-13 00:50:22.985166 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985176 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985186 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985196 | orchestrator | 2025-04-13 00:50:22.985206 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-04-13 00:50:22.985216 | orchestrator | Sunday 13 April 2025 00:49:03 +0000 (0:00:00.689) 0:01:04.725 ********** 2025-04-13 00:50:22.985226 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985236 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985246 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985256 | 
orchestrator | 2025-04-13 00:50:22.985271 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-04-13 00:50:22.985282 | orchestrator | Sunday 13 April 2025 00:49:04 +0000 (0:00:00.604) 0:01:05.329 ********** 2025-04-13 00:50:22.985305 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985316 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985326 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985336 | orchestrator | 2025-04-13 00:50:22.985346 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-04-13 00:50:22.985356 | orchestrator | Sunday 13 April 2025 00:49:04 +0000 (0:00:00.771) 0:01:06.101 ********** 2025-04-13 00:50:22.985366 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985377 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985387 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985397 | orchestrator | 2025-04-13 00:50:22.985407 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-04-13 00:50:22.985417 | orchestrator | Sunday 13 April 2025 00:49:06 +0000 (0:00:01.192) 0:01:07.294 ********** 2025-04-13 00:50:22.985427 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985437 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985447 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985457 | orchestrator | 2025-04-13 00:50:22.985467 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-04-13 00:50:22.985477 | orchestrator | Sunday 13 April 2025 00:49:06 +0000 (0:00:00.422) 0:01:07.716 ********** 2025-04-13 00:50:22.985487 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985497 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985507 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985517 | 
orchestrator | 2025-04-13 00:50:22.985527 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-04-13 00:50:22.985536 | orchestrator | Sunday 13 April 2025 00:49:07 +0000 (0:00:00.772) 0:01:08.488 ********** 2025-04-13 00:50:22.985546 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985556 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985566 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985576 | orchestrator | 2025-04-13 00:50:22.985591 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-04-13 00:50:22.985601 | orchestrator | Sunday 13 April 2025 00:49:08 +0000 (0:00:01.154) 0:01:09.642 ********** 2025-04-13 00:50:22.985612 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985622 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985631 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985641 | orchestrator | 2025-04-13 00:50:22.985652 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-04-13 00:50:22.985665 | orchestrator | Sunday 13 April 2025 00:49:09 +0000 (0:00:00.582) 0:01:10.225 ********** 2025-04-13 00:50:22.985676 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985686 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985696 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985706 | orchestrator | 2025-04-13 00:50:22.985716 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-13 00:50:22.985726 | orchestrator | Sunday 13 April 2025 00:49:09 +0000 (0:00:00.330) 0:01:10.555 ********** 2025-04-13 00:50:22.985737 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:50:22.985747 | orchestrator | 2025-04-13 00:50:22.985757 | orchestrator | 
TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-04-13 00:50:22.985767 | orchestrator | Sunday 13 April 2025 00:49:10 +0000 (0:00:00.846) 0:01:11.402 ********** 2025-04-13 00:50:22.985777 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.985787 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.985797 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.985807 | orchestrator | 2025-04-13 00:50:22.985817 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-04-13 00:50:22.985833 | orchestrator | Sunday 13 April 2025 00:49:11 +0000 (0:00:00.911) 0:01:12.313 ********** 2025-04-13 00:50:22.985843 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.985853 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.985864 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.985874 | orchestrator | 2025-04-13 00:50:22.985884 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-04-13 00:50:22.985894 | orchestrator | Sunday 13 April 2025 00:49:11 +0000 (0:00:00.794) 0:01:13.108 ********** 2025-04-13 00:50:22.985904 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985914 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985925 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985935 | orchestrator | 2025-04-13 00:50:22.985945 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-04-13 00:50:22.985955 | orchestrator | Sunday 13 April 2025 00:49:12 +0000 (0:00:00.924) 0:01:14.032 ********** 2025-04-13 00:50:22.985965 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.985975 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.985985 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.985995 | orchestrator | 2025-04-13 00:50:22.986005 | orchestrator | TASK [ovn-db : Remove an old node with 
the same ip address as the new node in NB DB] *** 2025-04-13 00:50:22.986039 | orchestrator | Sunday 13 April 2025 00:49:13 +0000 (0:00:00.525) 0:01:14.558 ********** 2025-04-13 00:50:22.986051 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.986061 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.986071 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.986081 | orchestrator | 2025-04-13 00:50:22.986091 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-04-13 00:50:22.986101 | orchestrator | Sunday 13 April 2025 00:49:13 +0000 (0:00:00.454) 0:01:15.013 ********** 2025-04-13 00:50:22.986111 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.986125 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.986135 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.986146 | orchestrator | 2025-04-13 00:50:22.986156 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-04-13 00:50:22.986166 | orchestrator | Sunday 13 April 2025 00:49:14 +0000 (0:00:00.698) 0:01:15.711 ********** 2025-04-13 00:50:22.986176 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.986186 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.986196 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.986206 | orchestrator | 2025-04-13 00:50:22.986216 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-04-13 00:50:22.986226 | orchestrator | Sunday 13 April 2025 00:49:15 +0000 (0:00:00.847) 0:01:16.559 ********** 2025-04-13 00:50:22.986236 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.986246 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.986256 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.986266 | orchestrator | 2025-04-13 00:50:22.986276 | orchestrator | TASK [ovn-db : Ensuring 
config directories exist] ****************************** 2025-04-13 00:50:22.986286 | orchestrator | Sunday 13 April 2025 00:49:15 +0000 (0:00:00.494) 0:01:17.053 ********** 2025-04-13 00:50:22.986333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986492 | orchestrator | 2025-04-13 00:50:22.986506 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-13 00:50:22.986516 | orchestrator | Sunday 13 April 2025 00:49:17 +0000 (0:00:01.490) 0:01:18.546 ********** 2025-04-13 00:50:22.986527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-04-13 00:50:22.986625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986636 | orchestrator | 2025-04-13 00:50:22.986646 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-13 00:50:22.986656 | orchestrator | Sunday 13 April 2025 00:49:22 +0000 (0:00:05.236) 0:01:23.782 ********** 2025-04-13 00:50:22.986666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-04-13 00:50:22.986711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.986777 | orchestrator | 2025-04-13 00:50:22.986787 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-13 00:50:22.986797 | orchestrator | Sunday 13 April 2025 00:49:24 +0000 (0:00:02.415) 0:01:26.198 ********** 2025-04-13 00:50:22.986807 | orchestrator | 2025-04-13 00:50:22.986818 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-13 00:50:22.986828 | orchestrator | Sunday 13 April 2025 00:49:25 +0000 (0:00:00.065) 0:01:26.263 ********** 2025-04-13 00:50:22.986838 | orchestrator | 2025-04-13 00:50:22.986848 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-13 00:50:22.986856 | orchestrator | Sunday 13 April 2025 00:49:25 +0000 (0:00:00.089) 0:01:26.353 ********** 2025-04-13 00:50:22.986864 | orchestrator | 2025-04-13 00:50:22.986878 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-04-13 00:50:22.986890 | orchestrator | Sunday 13 April 2025 00:49:25 +0000 (0:00:00.237) 0:01:26.590 ********** 2025-04-13 00:50:22.986898 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:50:22.986907 | 
orchestrator | changed: [testbed-node-1] 2025-04-13 00:50:22.986915 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:50:22.986924 | orchestrator | 2025-04-13 00:50:22.986932 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-13 00:50:22.986941 | orchestrator | Sunday 13 April 2025 00:49:33 +0000 (0:00:07.660) 0:01:34.251 ********** 2025-04-13 00:50:22.986949 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:50:22.986958 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:50:22.986966 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:50:22.986974 | orchestrator | 2025-04-13 00:50:22.986983 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-13 00:50:22.986991 | orchestrator | Sunday 13 April 2025 00:49:35 +0000 (0:00:02.733) 0:01:36.984 ********** 2025-04-13 00:50:22.987000 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:50:22.987008 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:50:22.987017 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:50:22.987025 | orchestrator | 2025-04-13 00:50:22.987034 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-13 00:50:22.987042 | orchestrator | Sunday 13 April 2025 00:49:38 +0000 (0:00:02.794) 0:01:39.778 ********** 2025-04-13 00:50:22.987050 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.987059 | orchestrator | 2025-04-13 00:50:22.987067 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-13 00:50:22.987075 | orchestrator | Sunday 13 April 2025 00:49:38 +0000 (0:00:00.130) 0:01:39.909 ********** 2025-04-13 00:50:22.987084 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.987092 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.987101 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.987109 | orchestrator | 2025-04-13 
00:50:22.987122 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-13 00:50:22.987131 | orchestrator | Sunday 13 April 2025 00:49:39 +0000 (0:00:01.098) 0:01:41.008 ********** 2025-04-13 00:50:22.987139 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.987148 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.987156 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:50:22.987164 | orchestrator | 2025-04-13 00:50:22.987173 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-13 00:50:22.987181 | orchestrator | Sunday 13 April 2025 00:49:40 +0000 (0:00:00.638) 0:01:41.647 ********** 2025-04-13 00:50:22.987190 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.987198 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.987207 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.987215 | orchestrator | 2025-04-13 00:50:22.987224 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-13 00:50:22.987232 | orchestrator | Sunday 13 April 2025 00:49:41 +0000 (0:00:01.067) 0:01:42.714 ********** 2025-04-13 00:50:22.987241 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.987249 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.987258 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:50:22.987266 | orchestrator | 2025-04-13 00:50:22.987275 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-13 00:50:22.987283 | orchestrator | Sunday 13 April 2025 00:49:42 +0000 (0:00:00.650) 0:01:43.365 ********** 2025-04-13 00:50:22.987308 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.987317 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.987326 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.987334 | orchestrator | 2025-04-13 00:50:22.987343 | orchestrator | TASK 
[ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-13 00:50:22.987351 | orchestrator | Sunday 13 April 2025 00:49:43 +0000 (0:00:01.282) 0:01:44.647 ********** 2025-04-13 00:50:22.987365 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.987373 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.987382 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.987390 | orchestrator | 2025-04-13 00:50:22.987399 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-04-13 00:50:22.987407 | orchestrator | Sunday 13 April 2025 00:49:44 +0000 (0:00:00.723) 0:01:45.371 ********** 2025-04-13 00:50:22.987416 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.987424 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.987432 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.987441 | orchestrator | 2025-04-13 00:50:22.987449 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-13 00:50:22.987458 | orchestrator | Sunday 13 April 2025 00:49:44 +0000 (0:00:00.573) 0:01:45.944 ********** 2025-04-13 00:50:22.987467 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987476 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987485 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987494 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987503 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987512 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987525 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987534 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987550 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987559 | orchestrator | 2025-04-13 00:50:22.987567 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-13 00:50:22.987576 | orchestrator | Sunday 13 April 2025 00:49:46 +0000 (0:00:01.593) 0:01:47.538 ********** 2025-04-13 00:50:22.987585 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987594 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987602 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987611 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-04-13 00:50:22.987648 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987679 | orchestrator | 2025-04-13 00:50:22.987688 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-13 00:50:22.987697 | orchestrator | Sunday 13 April 2025 00:49:50 +0000 (0:00:03.992) 0:01:51.531 ********** 2025-04-13 00:50:22.987706 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987715 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987724 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987732 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987744 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987753 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987762 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987778 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987792 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 00:50:22.987801 | orchestrator | 2025-04-13 00:50:22.987810 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-13 00:50:22.987818 | orchestrator | Sunday 13 April 2025 00:49:53 +0000 (0:00:03.069) 0:01:54.601 ********** 2025-04-13 00:50:22.987827 | orchestrator | 2025-04-13 00:50:22.987835 | orchestrator | 
TASK [ovn-db : Flush handlers] ************************************************* 2025-04-13 00:50:22.987844 | orchestrator | Sunday 13 April 2025 00:49:53 +0000 (0:00:00.226) 0:01:54.828 ********** 2025-04-13 00:50:22.987852 | orchestrator | 2025-04-13 00:50:22.987864 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-13 00:50:22.987873 | orchestrator | Sunday 13 April 2025 00:49:53 +0000 (0:00:00.072) 0:01:54.901 ********** 2025-04-13 00:50:22.987881 | orchestrator | 2025-04-13 00:50:22.987890 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-04-13 00:50:22.987898 | orchestrator | Sunday 13 April 2025 00:49:53 +0000 (0:00:00.067) 0:01:54.968 ********** 2025-04-13 00:50:22.987907 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:50:22.987915 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:50:22.987924 | orchestrator | 2025-04-13 00:50:22.987932 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-13 00:50:22.987941 | orchestrator | Sunday 13 April 2025 00:50:00 +0000 (0:00:06.720) 0:02:01.688 ********** 2025-04-13 00:50:22.987949 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:50:22.987958 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:50:22.987966 | orchestrator | 2025-04-13 00:50:22.987975 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-13 00:50:22.987983 | orchestrator | Sunday 13 April 2025 00:50:06 +0000 (0:00:06.279) 0:02:07.968 ********** 2025-04-13 00:50:22.987992 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:50:22.988000 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:50:22.988009 | orchestrator | 2025-04-13 00:50:22.988017 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-13 00:50:22.988026 | orchestrator | Sunday 13 April 2025 
00:50:13 +0000 (0:00:06.534) 0:02:14.502 ********** 2025-04-13 00:50:22.988034 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:50:22.988043 | orchestrator | 2025-04-13 00:50:22.988051 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-13 00:50:22.988060 | orchestrator | Sunday 13 April 2025 00:50:13 +0000 (0:00:00.287) 0:02:14.789 ********** 2025-04-13 00:50:22.988068 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.988076 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.988085 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.988093 | orchestrator | 2025-04-13 00:50:22.988102 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-13 00:50:22.988110 | orchestrator | Sunday 13 April 2025 00:50:14 +0000 (0:00:00.730) 0:02:15.519 ********** 2025-04-13 00:50:22.988119 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.988127 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:50:22.988136 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:50:22.988150 | orchestrator | 2025-04-13 00:50:22.988159 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-13 00:50:22.988167 | orchestrator | Sunday 13 April 2025 00:50:14 +0000 (0:00:00.657) 0:02:16.176 ********** 2025-04-13 00:50:22.988181 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:50:22.988190 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:50:22.988199 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:50:22.988208 | orchestrator | 2025-04-13 00:50:22.988216 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-13 00:50:22.988225 | orchestrator | Sunday 13 April 2025 00:50:15 +0000 (0:00:00.936) 0:02:17.113 ********** 2025-04-13 00:50:22.988234 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:50:22.988242 | orchestrator | skipping: 
[testbed-node-2]
2025-04-13 00:50:22.988250 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:50:22.988259 | orchestrator |
2025-04-13 00:50:22.988267 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-04-13 00:50:22.988276 | orchestrator | Sunday 13 April 2025 00:50:16 +0000 (0:00:00.974) 0:02:18.087 **********
2025-04-13 00:50:22.988284 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:50:22.988309 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:50:22.988318 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:50:22.988326 | orchestrator |
2025-04-13 00:50:22.988335 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-04-13 00:50:22.988343 | orchestrator | Sunday 13 April 2025 00:50:18 +0000 (0:00:01.276) 0:02:19.364 **********
2025-04-13 00:50:22.988352 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:50:22.988360 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:50:22.988369 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:50:22.988377 | orchestrator |
2025-04-13 00:50:22.988386 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:50:22.988395 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-04-13 00:50:22.988403 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-04-13 00:50:22.988416 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-04-13 00:50:22.988505 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:50:22.988522 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:50:22.988531 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 00:50:22.988540 | orchestrator |
2025-04-13 00:50:22.988548 | orchestrator |
2025-04-13 00:50:22.988557 | orchestrator | TASKS RECAP ********************************************************************
2025-04-13 00:50:22.988566 | orchestrator | Sunday 13 April 2025 00:50:19 +0000 (0:00:01.674) 0:02:21.038 **********
2025-04-13 00:50:22.988574 | orchestrator | ===============================================================================
2025-04-13 00:50:22.988583 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 19.52s
2025-04-13 00:50:22.988591 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.78s
2025-04-13 00:50:22.988600 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.38s
2025-04-13 00:50:22.988608 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.33s
2025-04-13 00:50:22.988617 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.01s
2025-04-13 00:50:22.988625 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.24s
2025-04-13 00:50:22.988637 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.99s
2025-04-13 00:50:22.988646 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.07s
2025-04-13 00:50:22.988654 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.98s
2025-04-13 00:50:22.988668 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.42s
2025-04-13 00:50:22.988677 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.39s
2025-04-13 00:50:22.988685 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.34s
2025-04-13 00:50:22.988694 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.23s
2025-04-13 00:50:22.988702 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.12s
2025-04-13 00:50:22.988711 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.88s
2025-04-13 00:50:22.988719 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.87s
2025-04-13 00:50:22.988728 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.67s
2025-04-13 00:50:22.988737 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.59s
2025-04-13 00:50:22.988745 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s
2025-04-13 00:50:22.988754 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.47s
2025-04-13 00:50:22.988762 | orchestrator | 2025-04-13 00:50:22 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:50:22.988774 | orchestrator | 2025-04-13 00:50:22 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:50:26.028592 | orchestrator | 2025-04-13 00:50:22 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:50:26.028849 | orchestrator | 2025-04-13 00:50:26 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED
2025-04-13 00:50:29.070134 | orchestrator | 2025-04-13 00:50:26 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:50:29.070254 | orchestrator | 2025-04-13 00:50:26 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:50:29.070274 | orchestrator | 2025-04-13 00:50:26 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:50:29.070359 | orchestrator | 2025-04-13 00:50:29 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state
STARTED 2025-04-13 00:50:29.073336 | orchestrator | 2025-04-13 00:50:29 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:50:32.124669 | orchestrator | 2025-04-13 00:50:29 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:50:32.124804 | orchestrator | 2025-04-13 00:50:29 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:50:32.124840 | orchestrator | 2025-04-13 00:50:32 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:50:32.125942 | orchestrator | 2025-04-13 00:50:32 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:50:32.127240 | orchestrator | 2025-04-13 00:50:32 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:50:35.172833 | orchestrator | 2025-04-13 00:50:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:50:35.173081 | orchestrator | 2025-04-13 00:50:35 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:50:35.178744 | orchestrator | 2025-04-13 00:50:35 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:50:38.222591 | orchestrator | 2025-04-13 00:50:35 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:50:38.222748 | orchestrator | 2025-04-13 00:50:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:50:38.222808 | orchestrator | 2025-04-13 00:50:38 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:50:38.225649 | orchestrator | 2025-04-13 00:50:38 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:50:38.231054 | orchestrator | 2025-04-13 00:50:38 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:50:41.287032 | orchestrator | 2025-04-13 00:50:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:50:41.287139 | orchestrator | 
[... identical 1-second polling of tasks 9e270c5c, 79d052e9 and 5c95b36a (all still STARTED) from 00:50:41 to 00:53:10, elided ...]
Wait 1 second(s) until the next check
[... polling repeats through 00:53:20, elided ...]
2025-04-13 00:53:23.136303 | orchestrator | 2025-04-13 00:53:23 | INFO  | Task af438553-6191-406b-a5bc-b173c5d8f3d4 is in state STARTED
2025-04-13 00:53:23.136645 | orchestrator | 2025-04-13 00:53:23 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED
2025-04-13 00:53:23.136688 | orchestrator | 2025-04-13 00:53:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state
STARTED
2025-04-13 00:53:23.142461 | orchestrator | 2025-04-13 00:53:23 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
[... 1-second polling of all four tasks (all STARTED) from 00:53:23 to 00:53:32, elided ...]
2025-04-13 
00:53:32.297601 | orchestrator | 2025-04-13 00:53:32 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:53:35.347226 | orchestrator | 2025-04-13 00:53:35 | INFO  | Task af438553-6191-406b-a5bc-b173c5d8f3d4 is in state SUCCESS
[... 1-second polling of tasks 9e270c5c, 79d052e9 and 5c95b36a (all still STARTED) from 00:53:35 to 00:53:41, elided ...]
2025-04-13 00:53:44.479980 | orchestrator | 2025-04-13 00:53:44 | 
INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED
[... 1-second polling of the three tasks (all still STARTED) from 00:53:44 to 00:54:05, elided ...]
2025-04-13 
00:54:08.913893 | orchestrator | 2025-04-13 00:54:05 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:08.914113 | orchestrator | 2025-04-13 00:54:08 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:54:11.954380 | orchestrator | 2025-04-13 00:54:08 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:11.954569 | orchestrator | 2025-04-13 00:54:08 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:11.954592 | orchestrator | 2025-04-13 00:54:08 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:11.954651 | orchestrator | 2025-04-13 00:54:11 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:54:11.956269 | orchestrator | 2025-04-13 00:54:11 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:15.010147 | orchestrator | 2025-04-13 00:54:11 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:15.010348 | orchestrator | 2025-04-13 00:54:11 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:15.010389 | orchestrator | 2025-04-13 00:54:15 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state STARTED 2025-04-13 00:54:15.012708 | orchestrator | 2025-04-13 00:54:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:15.012753 | orchestrator | 2025-04-13 00:54:15 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:15.012856 | orchestrator | 2025-04-13 00:54:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:18.065077 | orchestrator | 2025-04-13 00:54:18 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:18.079087 | orchestrator | 2025-04-13 00:54:18 | INFO  | Task 9e270c5c-209d-4bf2-809d-0acee9f47c38 is in state SUCCESS 2025-04-13 00:54:18.080057 | orchestrator | 2025-04-13 
00:54:18.080103 | orchestrator | None 2025-04-13 00:54:18.080119 | orchestrator | 2025-04-13 00:54:18.080133 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 00:54:18.080148 | orchestrator | 2025-04-13 00:54:18.080198 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 00:54:18.080215 | orchestrator | Sunday 13 April 2025 00:46:45 +0000 (0:00:00.299) 0:00:00.299 ********** 2025-04-13 00:54:18.080229 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.080245 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.080259 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.080273 | orchestrator | 2025-04-13 00:54:18.080286 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 00:54:18.080301 | orchestrator | Sunday 13 April 2025 00:46:45 +0000 (0:00:00.353) 0:00:00.653 ********** 2025-04-13 00:54:18.080316 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-04-13 00:54:18.080477 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-04-13 00:54:18.080498 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-04-13 00:54:18.080512 | orchestrator | 2025-04-13 00:54:18.080526 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-04-13 00:54:18.080539 | orchestrator | 2025-04-13 00:54:18.080553 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-04-13 00:54:18.080567 | orchestrator | Sunday 13 April 2025 00:46:46 +0000 (0:00:00.458) 0:00:01.112 ********** 2025-04-13 00:54:18.080581 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.080595 | orchestrator | 2025-04-13 00:54:18.080609 | orchestrator | TASK [loadbalancer : Check IPv6 
support] *************************************** 2025-04-13 00:54:18.080622 | orchestrator | Sunday 13 April 2025 00:46:47 +0000 (0:00:00.901) 0:00:02.013 ********** 2025-04-13 00:54:18.080636 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.080652 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.080668 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.080683 | orchestrator | 2025-04-13 00:54:18.080699 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-04-13 00:54:18.080715 | orchestrator | Sunday 13 April 2025 00:46:48 +0000 (0:00:01.125) 0:00:03.139 ********** 2025-04-13 00:54:18.080731 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.080745 | orchestrator | 2025-04-13 00:54:18.080759 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-04-13 00:54:18.080773 | orchestrator | Sunday 13 April 2025 00:46:49 +0000 (0:00:01.374) 0:00:04.514 ********** 2025-04-13 00:54:18.080786 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.080800 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.080813 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.080852 | orchestrator | 2025-04-13 00:54:18.080883 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-04-13 00:54:18.080898 | orchestrator | Sunday 13 April 2025 00:46:51 +0000 (0:00:01.524) 0:00:06.039 ********** 2025-04-13 00:54:18.080912 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-04-13 00:54:18.080926 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-04-13 00:54:18.080940 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-04-13 00:54:18.080954 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-04-13 00:54:18.080968 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-04-13 00:54:18.080984 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-04-13 00:54:18.080997 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-04-13 00:54:18.081011 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-04-13 00:54:18.081025 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-04-13 00:54:18.081039 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-04-13 00:54:18.081053 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-04-13 00:54:18.081066 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-04-13 00:54:18.081080 | orchestrator | 2025-04-13 00:54:18.081094 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-13 00:54:18.081107 | orchestrator | Sunday 13 April 2025 00:46:56 +0000 (0:00:05.220) 0:00:11.259 ********** 2025-04-13 00:54:18.081121 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-04-13 00:54:18.081150 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-04-13 00:54:18.081190 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-04-13 00:54:18.081205 | orchestrator | 2025-04-13 00:54:18.081219 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-13 00:54:18.081232 | orchestrator | Sunday 13 April 2025 00:46:57 +0000 (0:00:01.274) 0:00:12.534 ********** 2025-04-13 00:54:18.081246 | orchestrator | changed: 
[testbed-node-0] => (item=ip_vs) 2025-04-13 00:54:18.081260 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-04-13 00:54:18.081273 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-04-13 00:54:18.081287 | orchestrator | 2025-04-13 00:54:18.081300 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-13 00:54:18.081314 | orchestrator | Sunday 13 April 2025 00:46:59 +0000 (0:00:02.267) 0:00:14.801 ********** 2025-04-13 00:54:18.081328 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-04-13 00:54:18.081382 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.081409 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-04-13 00:54:18.081423 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.081437 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-04-13 00:54:18.081451 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.081464 | orchestrator | 2025-04-13 00:54:18.081478 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-04-13 00:54:18.081492 | orchestrator | Sunday 13 April 2025 00:47:00 +0000 (0:00:00.743) 0:00:15.545 ********** 2025-04-13 00:54:18.081508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-13 00:54:18.081539 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-13 00:54:18.081555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-13 00:54:18.081570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2025-04-13 00:54:18.081585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-13 00:54:18.081607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-13 00:54:18.081623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-13 00:54:18.081645 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-13 00:54:18.081660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-13 00:54:18.081675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-13 00:54:18.081690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-13 00:54:18.081704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-13 00:54:18.081719 | orchestrator | 2025-04-13 00:54:18.081733 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-04-13 00:54:18.081747 | orchestrator | Sunday 13 April 2025 00:47:02 +0000 (0:00:02.136) 0:00:17.681 ********** 2025-04-13 00:54:18.081761 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.081775 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.081789 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.081802 | orchestrator | 2025-04-13 00:54:18.081821 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-04-13 00:54:18.081835 | orchestrator | Sunday 13 
April 2025 00:47:05 +0000 (0:00:02.317) 0:00:19.999 ********** 2025-04-13 00:54:18.081856 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-04-13 00:54:18.081870 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-04-13 00:54:18.081883 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-04-13 00:54:18.081897 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-04-13 00:54:18.081911 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-04-13 00:54:18.081924 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-04-13 00:54:18.081938 | orchestrator | 2025-04-13 00:54:18.081952 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-04-13 00:54:18.081965 | orchestrator | Sunday 13 April 2025 00:47:09 +0000 (0:00:04.406) 0:00:24.406 ********** 2025-04-13 00:54:18.081979 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.081993 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.082007 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.082079 | orchestrator | 2025-04-13 00:54:18.082097 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-04-13 00:54:18.082112 | orchestrator | Sunday 13 April 2025 00:47:11 +0000 (0:00:01.624) 0:00:26.030 ********** 2025-04-13 00:54:18.082126 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.082140 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.082154 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.082211 | orchestrator | 2025-04-13 00:54:18.082239 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-04-13 00:54:18.082262 | orchestrator | Sunday 13 April 2025 00:47:12 +0000 (0:00:01.732) 0:00:27.762 ********** 2025-04-13 00:54:18.082277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-13 00:54:18.082292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-13 00:54:18.082307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-13 00:54:18.082322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-13 00:54:18.082392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-13 00:54:18.082409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-13 00:54:18.082424 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-13 00:54:18.082438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-13 00:54:18.082453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-13 00:54:18.082468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-13 00:54:18.082482 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.082504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-13 00:54:18.082519 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.082564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
2985'], 'timeout': '30'}}})  2025-04-13 00:54:18.082579 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.082593 | orchestrator | 2025-04-13 00:54:18.082607 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-04-13 00:54:18.082622 | orchestrator | Sunday 13 April 2025 00:47:16 +0000 (0:00:03.546) 0:00:31.308 ********** 2025-04-13 00:54:18.082636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-13 00:54:18.082651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-13 00:54:18.082665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-13 00:54:18.082679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-13 00:54:18.082707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-13 00:54:18.082722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-13 00:54:18.082736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-13 00:54:18.082751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-13 00:54:18.082765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-13 00:54:18.082780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-13 00:54:18.082801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-13 00:54:18.082868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-13 00:54:18.082887 | orchestrator | 2025-04-13 00:54:18.082901 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-04-13 00:54:18.082915 | orchestrator | Sunday 13 April 2025 00:47:20 +0000 (0:00:04.599) 0:00:35.908 ********** 2025-04-13 00:54:18.082929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-13 00:54:18.082955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-13 00:54:18.082970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-13 00:54:18.083105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-13 00:54:18.083130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-13 00:54:18.083273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-13 00:54:18.083299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-13 00:54:18.083314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-13 00:54:18.083329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-13 00:54:18.083343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-13 00:54:18.083367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-13 00:54:18.083382 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-13 00:54:18.083395 | orchestrator | 2025-04-13 00:54:18.083407 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-04-13 00:54:18.083419 | orchestrator | Sunday 13 April 2025 00:47:25 +0000 (0:00:04.777) 0:00:40.686 ********** 2025-04-13 00:54:18.083437 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-13 00:54:18.083457 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-13 00:54:18.083470 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-13 00:54:18.083482 | orchestrator | 2025-04-13 00:54:18.083512 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-04-13 00:54:18.083538 | orchestrator | Sunday 13 April 2025 00:47:28 +0000 (0:00:02.292) 0:00:42.978 ********** 2025-04-13 00:54:18.083552 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-13 00:54:18.083565 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 
2025-04-13 00:54:18.083611 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-13 00:54:18.083624 | orchestrator | 2025-04-13 00:54:18.083636 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-04-13 00:54:18.083649 | orchestrator | Sunday 13 April 2025 00:47:31 +0000 (0:00:03.496) 0:00:46.474 ********** 2025-04-13 00:54:18.083661 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.083710 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.083724 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.083795 | orchestrator | 2025-04-13 00:54:18.083809 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-04-13 00:54:18.083822 | orchestrator | Sunday 13 April 2025 00:47:33 +0000 (0:00:01.601) 0:00:48.076 ********** 2025-04-13 00:54:18.083834 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-13 00:54:18.083847 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-13 00:54:18.083860 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-13 00:54:18.083880 | orchestrator | 2025-04-13 00:54:18.083892 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-04-13 00:54:18.083905 | orchestrator | Sunday 13 April 2025 00:47:35 +0000 (0:00:02.716) 0:00:50.793 ********** 2025-04-13 00:54:18.083917 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-13 00:54:18.083930 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-13 00:54:18.083942 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-13 00:54:18.083955 | orchestrator | 2025-04-13 00:54:18.083967 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-04-13 00:54:18.083979 | orchestrator | Sunday 13 April 2025 00:47:38 +0000 (0:00:02.496) 0:00:53.289 ********** 2025-04-13 00:54:18.083992 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-04-13 00:54:18.084020 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-04-13 00:54:18.084033 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-04-13 00:54:18.084046 | orchestrator | 2025-04-13 00:54:18.084058 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-04-13 00:54:18.084070 | orchestrator | Sunday 13 April 2025 00:47:40 +0000 (0:00:02.449) 0:00:55.738 ********** 2025-04-13 00:54:18.084083 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-04-13 00:54:18.084095 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-04-13 00:54:18.084108 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-04-13 00:54:18.084120 | orchestrator | 2025-04-13 00:54:18.084132 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-04-13 00:54:18.084145 | orchestrator | Sunday 13 April 2025 00:47:43 +0000 (0:00:02.402) 0:00:58.141 ********** 2025-04-13 00:54:18.084157 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.084197 | orchestrator | 2025-04-13 00:54:18.084210 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-04-13 
00:54:18.084222 | orchestrator | Sunday 13 April 2025 00:47:44 +0000 (0:00:00.841) 0:00:58.982 ********** 2025-04-13 00:54:18.084235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-13 00:54:18.084257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-13 00:54:18.084276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-13 00:54:18.084295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-13 00:54:18.084309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-13 00:54:18.084322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-13 00:54:18.084335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-13 00:54:18.084359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-13 00:54:18.084379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-13 00:54:18.084398 | orchestrator | 2025-04-13 00:54:18.084411 | orchestrator | TASK [service-cert-copy : 
loadbalancer | Copying over backend internal TLS certificate] *** 2025-04-13 00:54:18.084423 | orchestrator | Sunday 13 April 2025 00:47:47 +0000 (0:00:03.863) 0:01:02.846 ********** 2025-04-13 00:54:18.084436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-13 00:54:18.084449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-13 00:54:18.084462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-13 00:54:18.084474 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.084487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-13 00:54:18.084499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-13 00:54:18.084525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-13 00:54:18.084545 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.084558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-13 00:54:18.084571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-13 00:54:18.084584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-13 00:54:18.084596 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.084609 | orchestrator | 2025-04-13 00:54:18.084621 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-04-13 00:54:18.084633 | orchestrator | Sunday 13 April 2025 00:47:48 +0000 (0:00:00.867) 0:01:03.713 ********** 2025-04-13 00:54:18.084646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-13 00:54:18.084658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-13 00:54:18.084681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-13 00:54:18.084701 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.084714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-04-13 00:54:18.084726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-13 00:54:18.084740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-04-13 00:54:18.084752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-13 00:54:18.084765 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.084778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-13 00:54:18.084795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-13 00:54:18.084814 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.084826 | orchestrator |
2025-04-13 00:54:18.084839 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-04-13 00:54:18.084857 | orchestrator | Sunday 13 April 2025 00:47:49 +0000 (0:00:01.174) 0:01:04.888 **********
2025-04-13 00:54:18.084869 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-04-13 00:54:18.084882 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-04-13 00:54:18.084895 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-04-13 00:54:18.084907 | orchestrator |
2025-04-13 00:54:18.084919 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-04-13 00:54:18.084932 | orchestrator | Sunday 13 April 2025 00:47:51 +0000 (0:00:01.782) 0:01:06.670 **********
2025-04-13 00:54:18.084944 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-04-13 00:54:18.084956 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-04-13 00:54:18.084968 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-04-13 00:54:18.084981 | orchestrator |
2025-04-13 00:54:18.084993 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-04-13 00:54:18.085005 | orchestrator | Sunday 13 April 2025 00:47:53 +0000 (0:00:01.965) 0:01:08.636 **********
2025-04-13 00:54:18.085017 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-04-13 00:54:18.085030 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-04-13 00:54:18.085042 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-04-13 00:54:18.085054 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-13 00:54:18.085066 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.085083 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-13 00:54:18.085096 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.085108 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-13 00:54:18.085120 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.085133 | orchestrator |
2025-04-13 00:54:18.085145 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-04-13 00:54:18.085157 | orchestrator | Sunday 13 April 2025 00:47:55 +0000 (0:00:01.891) 0:01:10.528 **********
2025-04-13 00:54:18.085218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-04-13 00:54:18.085232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-04-13 00:54:18.085257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-04-13 00:54:18.085288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-13 00:54:18.085308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-13 00:54:18.085338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-13 00:54:18.085358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-13 00:54:18.085378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-13 00:54:18.085413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-13 00:54:18.085443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-13 00:54:18.085616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-13 00:54:18.085711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2', '__omit_place_holder__d7fd93e695623f439469e6785b1113ba2907edf2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-13 00:54:18.085741 | orchestrator |
2025-04-13 00:54:18.085797 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-04-13 00:54:18.085824 | orchestrator | Sunday 13 April 2025 00:47:59 +0000 (0:00:03.549) 0:01:14.077 **********
2025-04-13 00:54:18.085846 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:54:18.085864 | orchestrator |
2025-04-13 00:54:18.085876 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-04-13 00:54:18.085889 | orchestrator | Sunday 13 April 2025 00:47:59 +0000 (0:00:00.852) 0:01:14.930 **********
2025-04-13 00:54:18.085902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-13 00:54:18.085951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-13 00:54:18.085966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.085990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.086009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-13 00:54:18.086057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-13 00:54:18.086071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.086090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.086103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-13 00:54:18.086158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-13 00:54:18.086288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.086302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.086315 | orchestrator |
2025-04-13 00:54:18.086328 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-04-13 00:54:18.086396 | orchestrator | Sunday 13 April 2025 00:48:05 +0000 (0:00:05.043) 0:01:19.974 **********
2025-04-13 00:54:18.086412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-13 00:54:18.086433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-13 00:54:18.086447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-13 00:54:18.086512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.086530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-13 00:54:18.086543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.086556 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.086569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.086588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-13 00:54:18.086601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.086614 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.086649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-13 00:54:18.086663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.086676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.086689 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.086701 | orchestrator |
2025-04-13 00:54:18.086714 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-04-13 00:54:18.086726 | orchestrator | Sunday 13 April 2025 00:48:05 +0000 (0:00:00.941) 0:01:20.915 **********
2025-04-13 00:54:18.086746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-04-13 00:54:18.086762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-04-13 00:54:18.086774 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.086787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-04-13 00:54:18.086808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-04-13 00:54:18.086830 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.086852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-04-13 00:54:18.086874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-04-13 00:54:18.086896 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.086918 | orchestrator |
2025-04-13 00:54:18.086937 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-04-13 00:54:18.086960 | orchestrator | Sunday 13 April 2025 00:48:07 +0000 (0:00:01.306) 0:01:22.222 **********
2025-04-13 00:54:18.086981 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.087039 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.087064 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.087277 | orchestrator |
2025-04-13 00:54:18.087294 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-04-13 00:54:18.087307 | orchestrator | Sunday 13 April 2025 00:48:08 +0000 (0:00:01.457) 0:01:23.679 **********
2025-04-13 00:54:18.087319 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.087332 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.087344 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.087357 | orchestrator |
2025-04-13 00:54:18.087369 | orchestrator | TASK [include_role : barbican] *************************************************
2025-04-13 00:54:18.087382 | orchestrator | Sunday 13 April 2025 00:48:11 +0000 (0:00:03.003) 0:01:26.682 **********
2025-04-13 00:54:18.087395 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:54:18.087407 | orchestrator |
2025-04-13 00:54:18.087420 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-04-13 00:54:18.087432 | orchestrator | Sunday 13 April 2025 00:48:12 +0000 (0:00:01.126) 0:01:27.809 **********
2025-04-13 00:54:18.087464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-13 00:54:18.087503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.087545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.087560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.087574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.087617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.089876 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.089940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.089974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.089990 | orchestrator | 2025-04-13 00:54:18.090856 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-04-13 00:54:18.090892 | orchestrator | Sunday 13 April 2025 00:48:18 +0000 (0:00:05.229) 0:01:33.038 ********** 2025-04-13 00:54:18.090908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.090938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.090965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.090979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.090993 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.091023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.091038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.091141 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.091237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.091275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.091288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.091301 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.091314 | orchestrator | 2025-04-13 00:54:18.091327 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-04-13 00:54:18.091339 | orchestrator | Sunday 13 April 2025 00:48:19 +0000 (0:00:01.065) 0:01:34.103 ********** 2025-04-13 00:54:18.091353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-13 00:54:18.091365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-13 00:54:18.091379 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.091392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-13 00:54:18.091411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-13 00:54:18.091424 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.091436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-13 00:54:18.091448 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-13 00:54:18.091461 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.091475 | orchestrator | 2025-04-13 00:54:18.091489 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-04-13 00:54:18.091503 | orchestrator | Sunday 13 April 2025 00:48:20 +0000 (0:00:01.046) 0:01:35.150 ********** 2025-04-13 00:54:18.091517 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.091576 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.091591 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.091612 | orchestrator | 2025-04-13 00:54:18.091633 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-04-13 00:54:18.091669 | orchestrator | Sunday 13 April 2025 00:48:21 +0000 (0:00:01.458) 0:01:36.609 ********** 2025-04-13 00:54:18.091689 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.091771 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.091787 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.091799 | orchestrator | 2025-04-13 00:54:18.091811 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-04-13 00:54:18.091822 | orchestrator | Sunday 13 April 2025 00:48:23 +0000 (0:00:02.208) 0:01:38.818 ********** 2025-04-13 00:54:18.091834 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.091844 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.091854 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.091864 | orchestrator | 2025-04-13 00:54:18.091882 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-04-13 00:54:18.091892 | 
orchestrator | Sunday 13 April 2025 00:48:24 +0000 (0:00:00.335) 0:01:39.154 ********** 2025-04-13 00:54:18.091902 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.091912 | orchestrator | 2025-04-13 00:54:18.091922 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-04-13 00:54:18.091933 | orchestrator | Sunday 13 April 2025 00:48:25 +0000 (0:00:00.903) 0:01:40.057 ********** 2025-04-13 00:54:18.091957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-13 00:54:18.091970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-13 00:54:18.091981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-13 00:54:18.091991 | orchestrator | 2025-04-13 00:54:18.092002 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-04-13 00:54:18.092019 | orchestrator | Sunday 13 April 2025 00:48:28 +0000 (0:00:03.365) 0:01:43.422 ********** 2025-04-13 00:54:18.092030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-13 00:54:18.092040 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.092063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-13 00:54:18.092075 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.092086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-13 00:54:18.092096 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.092106 | orchestrator | 2025-04-13 00:54:18.092116 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-04-13 00:54:18.092127 | orchestrator | Sunday 13 April 2025 00:48:30 +0000 (0:00:01.601) 0:01:45.023 ********** 2025-04-13 00:54:18.092189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-13 00:54:18.092205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-13 00:54:18.092224 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.092235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-13 00:54:18.092245 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-13 00:54:18.092256 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.092266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-13 00:54:18.092287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-13 00:54:18.092298 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.092308 | orchestrator | 2025-04-13 00:54:18.092318 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-04-13 00:54:18.092328 | orchestrator | Sunday 13 April 2025 00:48:32 +0000 (0:00:02.193) 0:01:47.217 ********** 2025-04-13 00:54:18.092338 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.092348 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.092358 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.092369 | orchestrator | 
2025-04-13 00:54:18.092379 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-04-13 00:54:18.092389 | orchestrator | Sunday 13 April 2025 00:48:33 +0000 (0:00:00.790) 0:01:48.007 ********** 2025-04-13 00:54:18.092399 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.092409 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.092535 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.092560 | orchestrator | 2025-04-13 00:54:18.092581 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-04-13 00:54:18.092601 | orchestrator | Sunday 13 April 2025 00:48:34 +0000 (0:00:01.405) 0:01:49.413 ********** 2025-04-13 00:54:18.092621 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.092641 | orchestrator | 2025-04-13 00:54:18.092660 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-04-13 00:54:18.092677 | orchestrator | Sunday 13 April 2025 00:48:35 +0000 (0:00:00.837) 0:01:50.251 ********** 2025-04-13 00:54:18.092689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.092709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.092720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.092750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.092762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.092773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.092789 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.092800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.092823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.092835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.092845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.092861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.092871 | orchestrator |
2025-04-13 00:54:18.092882 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-04-13 00:54:18.092896 | orchestrator | Sunday 13 April 2025 00:48:40 +0000 (0:00:05.597) 0:01:55.848 **********
2025-04-13 00:54:18.092907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.092924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.092986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.093012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.093117 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.093139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.093228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.093248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.093342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.093367 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.093387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.093434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.093446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.093457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.093467 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.093478 | orchestrator |
2025-04-13 00:54:18.093488 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-04-13 00:54:18.093498 | orchestrator | Sunday 13 April 2025 00:48:42 +0000 (0:00:01.618) 0:01:57.467 **********
2025-04-13 00:54:18.093509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-04-13 00:54:18.093525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-04-13 00:54:18.093537 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.093547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-04-13 00:54:18.093557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-04-13 00:54:18.093568 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.093584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-04-13 00:54:18.093594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-04-13 00:54:18.093605 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.093615 | orchestrator |
2025-04-13 00:54:18.093625 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-04-13 00:54:18.093635 | orchestrator | Sunday 13 April 2025 00:48:44 +0000 (0:00:02.053) 0:01:59.521 **********
2025-04-13 00:54:18.093645 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.093655 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.093665 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.093675 | orchestrator |
2025-04-13 00:54:18.093685 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-04-13 00:54:18.093695 | orchestrator | Sunday 13 April 2025 00:48:45 +0000 (0:00:01.339) 0:02:00.861 **********
2025-04-13 00:54:18.093705 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.093715 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.093725 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.093736 | orchestrator |
2025-04-13 00:54:18.093747 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-04-13 00:54:18.093764 | orchestrator | Sunday 13 April 2025 00:48:48 +0000 (0:00:00.478) 0:02:03.035 **********
2025-04-13 00:54:18.093782 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.093798 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.093945 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.094108 | orchestrator |
2025-04-13 00:54:18.094122 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-04-13 00:54:18.094132 | orchestrator | Sunday 13 April 2025 00:48:48 +0000 (0:00:00.295) 0:02:03.513 **********
2025-04-13 00:54:18.094143 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.094153 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.094221 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.094234 | orchestrator |
2025-04-13 00:54:18.094244 | orchestrator | TASK [include_role : designate] ************************************************
2025-04-13 00:54:18.094254 | orchestrator | Sunday 13 April 2025 00:48:48 +0000 (0:00:00.295) 0:02:03.808 **********
2025-04-13 00:54:18.094264 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:54:18.094274 | orchestrator |
2025-04-13 00:54:18.094284 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-04-13 00:54:18.094294 | orchestrator | Sunday 13 April 2025 00:48:49 +0000 (0:00:01.053) 0:02:04.862 **********
2025-04-13 00:54:18.094306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value':
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-13 00:54:18.094326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-13 00:54:18.094361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.094373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.094385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.094395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.094405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.094423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-13 00:54:18.094447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-13 00:54:18.094458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.094469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.094479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.094490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.094500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.094529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 00:54:18.094540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 00:54:18.094551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.094561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.094572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.094582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.094612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.094681 | orchestrator |
2025-04-13 00:54:18.094706 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-04-13 00:54:18.094767 | orchestrator | Sunday 13 April 2025 00:48:54 +0000 (0:00:04.717) 0:02:09.579 **********
2025-04-13 00:54:18.094786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 00:54:18.094802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 00:54:18.094817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.094841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.094850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.094872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.094881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.094890 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.094899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 00:54:18.094908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 00:54:18.094928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.094954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.095030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.095061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.095079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.095094 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.095103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 00:54:18.095121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 00:54:18.095137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.095146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.095182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.095193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.095202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.095211 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.095219 | orchestrator |
2025-04-13 00:54:18.095228 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-04-13 00:54:18.095236 | orchestrator | Sunday 13 April 2025 00:48:55 +0000 (0:00:01.022) 0:02:10.601 **********
2025-04-13 00:54:18.095245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-04-13 00:54:18.095254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-04-13 00:54:18.095269 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.095278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-04-13 00:54:18.095287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-04-13 00:54:18.095296 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.095304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-04-13 00:54:18.095313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-04-13 00:54:18.095321 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.095330 | orchestrator |
2025-04-13 00:54:18.095338 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-04-13 00:54:18.095347 | orchestrator | Sunday 13 April 2025 00:48:57 +0000 (0:00:01.407) 0:02:12.009 **********
2025-04-13 00:54:18.095355 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.095364 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.095373 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.095381 | orchestrator |
2025-04-13 00:54:18.095389 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-04-13 00:54:18.095398 | orchestrator | Sunday 13 April 2025 00:48:58 +0000 (0:00:01.600) 0:02:13.610 **********
2025-04-13 00:54:18.095406 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.095415 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.095423 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.095432 | orchestrator |
2025-04-13 00:54:18.095440 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-04-13 00:54:18.095449 | orchestrator | Sunday 13 April 2025 00:49:00 +0000 (0:00:02.188) 0:02:15.798 **********
2025-04-13 00:54:18.095457 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.095465 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.095474 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.095482 | orchestrator |
2025-04-13 00:54:18.095491 | orchestrator | TASK [include_role : glance] ***************************************************
2025-04-13 00:54:18.095503 | orchestrator | Sunday 13 April 2025 00:49:01 +0000 (0:00:00.527) 0:02:16.326 **********
2025-04-13 00:54:18.095512 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:54:18.095520 | orchestrator |
2025-04-13 00:54:18.095529 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-04-13 00:54:18.095538 | orchestrator | Sunday 13 April 2025 00:49:02 +0000 (0:00:01.156) 0:02:17.483 **********
2025-04-13 00:54:18.095554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-04-13 00:54:18.095569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-04-13 00:54:18.095589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-04-13 00:54:18.095611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-04-13 00:54:18.095627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-04-13 00:54:18.095643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-04-13 00:54:18.095656 | orchestrator |
2025-04-13 00:54:18.095665 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-04-13 00:54:18.095673 | orchestrator | Sunday 13 April 2025 00:49:10 +0000 (0:00:07.836) 0:02:25.319 **********
2025-04-13 00:54:18.095687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-04-13 00:54:18.095697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-04-13 00:54:18.095716 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.095731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-04-13 00:54:18.095746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-13 00:54:18.095761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-13 00:54:18.095778 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.095793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-13 00:54:18.095812 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.095821 | orchestrator | 2025-04-13 00:54:18.095830 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-04-13 00:54:18.095842 | orchestrator | Sunday 13 April 2025 00:49:16 +0000 (0:00:05.756) 0:02:31.075 ********** 2025-04-13 00:54:18.095852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-13 00:54:18.095861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-13 00:54:18.095870 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.095879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-13 00:54:18.095893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-13 00:54:18.095902 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.095911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-13 00:54:18.095924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-13 00:54:18.095933 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.095941 | orchestrator 
| 2025-04-13 00:54:18.095950 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-04-13 00:54:18.095958 | orchestrator | Sunday 13 April 2025 00:49:21 +0000 (0:00:05.305) 0:02:36.381 ********** 2025-04-13 00:54:18.095967 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.095975 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.095984 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.095992 | orchestrator | 2025-04-13 00:54:18.096001 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-04-13 00:54:18.096009 | orchestrator | Sunday 13 April 2025 00:49:22 +0000 (0:00:01.225) 0:02:37.606 ********** 2025-04-13 00:54:18.096018 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.096026 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.096035 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.096043 | orchestrator | 2025-04-13 00:54:18.096052 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-04-13 00:54:18.096060 | orchestrator | Sunday 13 April 2025 00:49:24 +0000 (0:00:02.059) 0:02:39.666 ********** 2025-04-13 00:54:18.096069 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.096077 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.096086 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.096094 | orchestrator | 2025-04-13 00:54:18.096103 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-04-13 00:54:18.096111 | orchestrator | Sunday 13 April 2025 00:49:25 +0000 (0:00:00.553) 0:02:40.219 ********** 2025-04-13 00:54:18.096120 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.096128 | orchestrator | 2025-04-13 00:54:18.096137 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] 
******************** 2025-04-13 00:54:18.096145 | orchestrator | Sunday 13 April 2025 00:49:26 +0000 (0:00:01.374) 0:02:41.594 ********** 2025-04-13 00:54:18.096154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-13 00:54:18.096186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-13 00:54:18.096210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-13 00:54:18.096230 | orchestrator | 2025-04-13 00:54:18.096239 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-04-13 00:54:18.096248 | orchestrator | Sunday 13 April 2025 00:49:30 +0000 (0:00:03.749) 0:02:45.343 ********** 2025-04-13 00:54:18.096257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-13 00:54:18.096266 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.096275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-13 00:54:18.096285 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.096294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-13 00:54:18.096303 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.096311 | orchestrator | 2025-04-13 00:54:18.096320 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-04-13 00:54:18.096328 | orchestrator | Sunday 13 April 2025 00:49:30 +0000 (0:00:00.385) 0:02:45.729 ********** 2025-04-13 00:54:18.096337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-13 00:54:18.096350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-13 00:54:18.096364 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.096386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}})  2025-04-13 00:54:18.096401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-13 00:54:18.096415 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.096428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-13 00:54:18.096441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-13 00:54:18.096450 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.096458 | orchestrator | 2025-04-13 00:54:18.096467 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-04-13 00:54:18.096475 | orchestrator | Sunday 13 April 2025 00:49:31 +0000 (0:00:01.051) 0:02:46.781 ********** 2025-04-13 00:54:18.096484 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.096492 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.096501 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.096509 | orchestrator | 2025-04-13 00:54:18.096518 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-04-13 00:54:18.096526 | orchestrator | Sunday 13 April 2025 00:49:33 +0000 (0:00:01.302) 0:02:48.083 ********** 2025-04-13 00:54:18.096534 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.096543 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.096551 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.096559 | orchestrator | 2025-04-13 00:54:18.096568 | orchestrator | TASK [include_role : heat] 
***************************************************** 2025-04-13 00:54:18.096576 | orchestrator | Sunday 13 April 2025 00:49:35 +0000 (0:00:02.338) 0:02:50.422 ********** 2025-04-13 00:54:18.096585 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.096593 | orchestrator | 2025-04-13 00:54:18.096601 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-04-13 00:54:18.096610 | orchestrator | Sunday 13 April 2025 00:49:36 +0000 (0:00:01.292) 0:02:51.715 ********** 2025-04-13 00:54:18.096619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.096637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.096651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.096666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.096676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.096685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.096694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.096708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.096723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.096732 | 
orchestrator | 2025-04-13 00:54:18.096745 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-04-13 00:54:18.096754 | orchestrator | Sunday 13 April 2025 00:49:43 +0000 (0:00:07.158) 0:02:58.873 ********** 2025-04-13 00:54:18.096762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.096772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.096780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.096794 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.096802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.096822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 
'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.096832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.096841 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.096849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.096858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.096872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.096882 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.096891 | orchestrator | 2025-04-13 00:54:18.096900 | orchestrator | TASK 
[haproxy-config : Configuring firewall for heat] ************************** 2025-04-13 00:54:18.096908 | orchestrator | Sunday 13 April 2025 00:49:45 +0000 (0:00:01.176) 0:03:00.050 ********** 2025-04-13 00:54:18.096918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-13 00:54:18.096934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-13 00:54:18.096950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-13 00:54:18.096982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-13 00:54:18.096995 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.097004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-13 00:54:18.097013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-13 00:54:18.097028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}) 
 2025-04-13 00:54:18.097037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-13 00:54:18.097045 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.097057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-13 00:54:18.097066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-13 00:54:18.097080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-13 00:54:18.097089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-13 00:54:18.097097 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.097106 | orchestrator | 2025-04-13 00:54:18.097114 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-04-13 00:54:18.097123 | orchestrator | Sunday 13 April 2025 00:49:46 +0000 (0:00:01.544) 0:03:01.594 ********** 2025-04-13 00:54:18.097131 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.097140 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.097148 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.097157 | orchestrator | 2025-04-13 00:54:18.097186 | 
orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-04-13 00:54:18.097194 | orchestrator | Sunday 13 April 2025 00:49:48 +0000 (0:00:01.556) 0:03:03.151 ********** 2025-04-13 00:54:18.097203 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.097211 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.097220 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.097228 | orchestrator | 2025-04-13 00:54:18.097241 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-04-13 00:54:18.097250 | orchestrator | Sunday 13 April 2025 00:49:50 +0000 (0:00:02.197) 0:03:05.348 ********** 2025-04-13 00:54:18.097258 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.097267 | orchestrator | 2025-04-13 00:54:18.097275 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-04-13 00:54:18.097284 | orchestrator | Sunday 13 April 2025 00:49:51 +0000 (0:00:01.071) 0:03:06.420 ********** 2025-04-13 00:54:18.097313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-13 00:54:18.097329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-13 00:54:18.097345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-13 00:54:18.097366 | orchestrator | 2025-04-13 00:54:18.097375 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-04-13 00:54:18.097384 | orchestrator | Sunday 13 April 
2025 00:49:55 +0000 (0:00:04.327) 0:03:10.747 ********** 2025-04-13 00:54:18.097393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-13 00:54:18.097405 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.097423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-13 00:54:18.097437 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.097446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-13 00:54:18.097462 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.097471 | orchestrator | 2025-04-13 00:54:18.097483 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-04-13 00:54:18.097492 | orchestrator | Sunday 13 April 2025 00:49:56 +0000 (0:00:00.900) 0:03:11.648 ********** 2025-04-13 00:54:18.097501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-13 00:54:18.097515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-13 00:54:18.097526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-13 00:54:18.097536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-13 00:54:18.097545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-13 00:54:18.097554 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.097566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-13 00:54:18.097576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-13 00:54:18.097585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}})  2025-04-13 00:54:18.097594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-13 00:54:18.097603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-13 00:54:18.097611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-13 00:54:18.097620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-13 00:54:18.097632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-13 00:54:18.097646 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.097655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-13 00:54:18.097664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-13 00:54:18.097672 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.097681 | orchestrator | 2025-04-13 00:54:18.097690 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-04-13 00:54:18.097698 | orchestrator | Sunday 13 April 2025 00:49:57 +0000 (0:00:01.267) 0:03:12.915 ********** 2025-04-13 00:54:18.097707 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.097715 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.097724 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.097732 | orchestrator | 2025-04-13 00:54:18.097741 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-04-13 00:54:18.097749 | orchestrator | Sunday 13 April 2025 00:49:59 +0000 (0:00:01.406) 0:03:14.321 ********** 2025-04-13 00:54:18.097758 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.097766 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.097775 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.097783 | orchestrator | 2025-04-13 00:54:18.097793 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-04-13 00:54:18.097808 | orchestrator | Sunday 13 April 2025 00:50:01 +0000 (0:00:02.384) 0:03:16.705 ********** 2025-04-13 00:54:18.097848 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.097858 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.097866 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.097875 | orchestrator | 2025-04-13 
00:54:18.097884 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-04-13 00:54:18.097892 | orchestrator | Sunday 13 April 2025 00:50:02 +0000 (0:00:00.493) 0:03:17.199 ********** 2025-04-13 00:54:18.097900 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.097909 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.097917 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.097926 | orchestrator | 2025-04-13 00:54:18.097935 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-04-13 00:54:18.097943 | orchestrator | Sunday 13 April 2025 00:50:02 +0000 (0:00:00.288) 0:03:17.488 ********** 2025-04-13 00:54:18.097951 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.097960 | orchestrator | 2025-04-13 00:54:18.097969 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-04-13 00:54:18.097977 | orchestrator | Sunday 13 April 2025 00:50:03 +0000 (0:00:01.275) 0:03:18.763 ********** 2025-04-13 00:54:18.097986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 00:54:18.098005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 00:54:18.098053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-13 00:54:18.098066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 00:54:18.098076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 00:54:18.098086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-13 00:54:18.098095 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 00:54:18.098116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 00:54:18.098126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-13 00:54:18.098135 | orchestrator | 2025-04-13 00:54:18.098144 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-04-13 00:54:18.098153 | orchestrator | Sunday 13 April 2025 00:50:08 +0000 (0:00:04.286) 0:03:23.050 ********** 2025-04-13 00:54:18.098280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-13 00:54:18.098299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 00:54:18.098317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-13 00:54:18.098326 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.098347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-13 00:54:18.098357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 00:54:18.098366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-13 00:54:18.098375 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.098384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-13 00:54:18.098398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 00:54:18.098407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-13 00:54:18.098416 | 
orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.098424 | orchestrator |
2025-04-13 00:54:18.098433 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-04-13 00:54:18.098442 | orchestrator | Sunday 13 April 2025 00:50:09 +0000 (0:00:01.046) 0:03:24.097 **********
2025-04-13 00:54:18.098455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-04-13 00:54:18.098469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-04-13 00:54:18.098478 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.098490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-04-13 00:54:18.098500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-04-13 00:54:18.098508 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.098517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-04-13 00:54:18.098526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-04-13 00:54:18.098535 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.098543 | orchestrator |
2025-04-13 00:54:18.098552 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-04-13 00:54:18.098560 | orchestrator | Sunday 13 April 2025 00:50:10 +0000 (0:00:00.967) 0:03:25.065 **********
2025-04-13 00:54:18.098569 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.098582 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.098590 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.098599 | orchestrator |
2025-04-13 00:54:18.098608 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-04-13 00:54:18.098616 | orchestrator | Sunday 13 April 2025 00:50:11 +0000 (0:00:01.397) 0:03:26.462 **********
2025-04-13 00:54:18.098625 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.098633 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.098642 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.098650 | orchestrator |
2025-04-13 00:54:18.098659 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-04-13 00:54:18.098667 | orchestrator | Sunday 13 April 2025 00:50:13 +0000 (0:00:02.300) 0:03:28.763 **********
2025-04-13 00:54:18.098676 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.098684 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.098693 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.098701 | orchestrator |
2025-04-13 00:54:18.098714 | orchestrator | TASK [include_role : magnum]
*************************************************** 2025-04-13 00:54:18.098723 | orchestrator | Sunday 13 April 2025 00:50:14 +0000 (0:00:00.329) 0:03:29.092 ********** 2025-04-13 00:54:18.098732 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.098740 | orchestrator | 2025-04-13 00:54:18.098749 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-04-13 00:54:18.098757 | orchestrator | Sunday 13 April 2025 00:50:15 +0000 (0:00:01.431) 0:03:30.524 ********** 2025-04-13 00:54:18.098766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 00:54:18.098780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.098790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 00:54:18.098804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.098813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 00:54:18.098822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.098831 | orchestrator | 2025-04-13 00:54:18.098840 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-04-13 00:54:18.098849 | orchestrator | 
Sunday 13 April 2025 00:50:20 +0000 (0:00:05.308) 0:03:35.833 ********** 2025-04-13 00:54:18.098862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 00:54:18.098871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.098884 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.098893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 00:54:18.098903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.098911 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.098924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 00:54:18.098933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.098947 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.098955 | orchestrator | 2025-04-13 00:54:18.098964 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-04-13 00:54:18.098973 | orchestrator | Sunday 13 April 2025 00:50:21 +0000 (0:00:01.103) 0:03:36.937 ********** 2025-04-13 00:54:18.098982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  
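The changed/skipping pattern in the loop results above is consistent throughout: for each project, only service entries that are enabled and carry a 'haproxy' map (keystone, magnum-api, manila-api) get a load-balancer config templated, while auxiliary containers (keystone-ssh, keystone-fernet, magnum-conductor) are skipped. A minimal, hypothetical sketch of that selection logic follows; `services_needing_haproxy` is an illustrative helper, not the actual kolla-ansible haproxy-config role code:

```python
# Illustrative sketch only (hypothetical helper, not kolla-ansible source):
# mirror the changed/skipping pattern in the haproxy-config loop output above.

def services_needing_haproxy(project_services: dict) -> list:
    """Return keys of services that are enabled and expose a 'haproxy' map."""
    selected = []
    for name, svc in project_services.items():
        if not svc.get("enabled"):
            continue
        if "haproxy" not in svc:
            # e.g. keystone-ssh, keystone-fernet, magnum-conductor:
            # enabled, but no load-balanced API endpoint -> "skipping"
            continue
        selected.append(name)
    return selected

# Trimmed-down version of the magnum service map seen in the log:
magnum_services = {
    "magnum-api": {"enabled": True, "haproxy": {"magnum_api": {"port": "9511"}}},
    "magnum-conductor": {"enabled": True},
}
print(services_needing_haproxy(magnum_services))  # ['magnum-api']
```

Under that assumption, the "Configuring firewall for keystone/magnum" tasks above skip every item for a different reason: the firewall handling itself is disabled in this testbed, so even haproxy-bearing entries produce no change.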
2025-04-13 00:54:18.098991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-04-13 00:54:18.099003 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.099012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-04-13 00:54:18.099020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-04-13 00:54:18.099029 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.099038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-04-13 00:54:18.099046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-04-13 00:54:18.099054 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.099063 | orchestrator |
2025-04-13 00:54:18.099071 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-04-13 00:54:18.099080 | orchestrator | Sunday 13 April 2025 00:50:23 +0000 (0:00:01.362) 0:03:38.300 **********
2025-04-13 00:54:18.099088 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.099097 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.099105 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.099114 | orchestrator |
2025-04-13 00:54:18.099123 | orchestrator | TASK
[proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-04-13 00:54:18.099131 | orchestrator | Sunday 13 April 2025 00:50:24 +0000 (0:00:01.384) 0:03:39.684 ********** 2025-04-13 00:54:18.099139 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.099148 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.099157 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.099224 | orchestrator | 2025-04-13 00:54:18.099234 | orchestrator | TASK [include_role : manila] *************************************************** 2025-04-13 00:54:18.099243 | orchestrator | Sunday 13 April 2025 00:50:27 +0000 (0:00:02.312) 0:03:41.997 ********** 2025-04-13 00:54:18.099252 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.099260 | orchestrator | 2025-04-13 00:54:18.099269 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-04-13 00:54:18.099277 | orchestrator | Sunday 13 April 2025 00:50:28 +0000 (0:00:01.185) 0:03:43.182 ********** 2025-04-13 00:54:18.099291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-13 
00:54:18.099307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099335 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-13 00:54:18.099344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-13 00:54:18.099390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099417 | orchestrator | 2025-04-13 00:54:18.099426 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-04-13 00:54:18.099434 | 
orchestrator | Sunday 13 April 2025 00:50:32 +0000 (0:00:04.271) 0:03:47.454 ********** 2025-04-13 00:54:18.099452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-13 00:54:18.099462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-13 00:54:18.099471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': 
{'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 
'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099511 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.099524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099543 | orchestrator | skipping: [testbed-node-0] 
2025-04-13 00:54:18.099553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-13 00:54:18.099562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.099594 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.099602 | orchestrator | 2025-04-13 00:54:18.099611 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-04-13 00:54:18.099620 | orchestrator | Sunday 13 April 2025 00:50:33 +0000 (0:00:00.708) 0:03:48.162 ********** 2025-04-13 00:54:18.099628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-13 00:54:18.099641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-13 00:54:18.099650 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.099659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  
2025-04-13 00:54:18.099667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-13 00:54:18.099676 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.099684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-13 00:54:18.099693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-13 00:54:18.099702 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.099710 | orchestrator | 2025-04-13 00:54:18.099719 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-04-13 00:54:18.099727 | orchestrator | Sunday 13 April 2025 00:50:34 +0000 (0:00:01.056) 0:03:49.219 ********** 2025-04-13 00:54:18.099736 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.099744 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.099751 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.099759 | orchestrator | 2025-04-13 00:54:18.099767 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-04-13 00:54:18.099775 | orchestrator | Sunday 13 April 2025 00:50:35 +0000 (0:00:01.288) 0:03:50.507 ********** 2025-04-13 00:54:18.099783 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.099791 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.099799 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.099807 | orchestrator | 2025-04-13 00:54:18.099815 | orchestrator | TASK [include_role : mariadb] 
************************************************** 2025-04-13 00:54:18.099823 | orchestrator | Sunday 13 April 2025 00:50:37 +0000 (0:00:02.339) 0:03:52.847 ********** 2025-04-13 00:54:18.099831 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.099839 | orchestrator | 2025-04-13 00:54:18.099847 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-04-13 00:54:18.099855 | orchestrator | Sunday 13 April 2025 00:50:39 +0000 (0:00:01.531) 0:03:54.379 ********** 2025-04-13 00:54:18.099863 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-13 00:54:18.099871 | orchestrator | 2025-04-13 00:54:18.099879 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-04-13 00:54:18.099894 | orchestrator | Sunday 13 April 2025 00:50:42 +0000 (0:00:03.312) 0:03:57.691 ********** 2025-04-13 00:54:18.099902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-13 00:54:18.099919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-13 00:54:18.099927 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.099936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-13 00:54:18.099949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-13 00:54:18.099958 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.099970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-13 00:54:18.099980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-13 00:54:18.099988 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.099996 | orchestrator | 2025-04-13 00:54:18.100004 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-04-13 00:54:18.100012 | orchestrator | Sunday 13 April 2025 00:50:45 +0000 (0:00:02.920) 0:04:00.612 ********** 2025-04-13 00:54:18.100025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-13 00:54:18.100037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-13 00:54:18.100046 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.100054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-13 00:54:18.100068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-13 00:54:18.100076 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.100097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-13 00:54:18.100107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-13 00:54:18.100115 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.100123 | orchestrator | 2025-04-13 00:54:18.100131 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-04-13 00:54:18.100139 | orchestrator | Sunday 13 April 2025 00:50:48 +0000 (0:00:03.260) 0:04:03.872 ********** 2025-04-13 00:54:18.100147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-13 00:54:18.100179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 
'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-13 00:54:18.100196 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.100210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-13 00:54:18.100222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-13 00:54:18.100230 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.100243 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-13 00:54:18.100259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-13 00:54:18.100268 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.100276 | orchestrator | 2025-04-13 00:54:18.100284 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-04-13 00:54:18.100292 | orchestrator | Sunday 13 April 2025 00:50:52 +0000 (0:00:03.449) 0:04:07.321 ********** 2025-04-13 00:54:18.100309 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.100318 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.100326 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.100334 | orchestrator | 2025-04-13 00:54:18.100342 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-04-13 00:54:18.100350 | orchestrator | Sunday 13 April 2025 
00:50:54 +0000 (0:00:02.213) 0:04:09.535 ********** 2025-04-13 00:54:18.100358 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.100366 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.100374 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.100382 | orchestrator | 2025-04-13 00:54:18.100390 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-04-13 00:54:18.100398 | orchestrator | Sunday 13 April 2025 00:50:56 +0000 (0:00:01.983) 0:04:11.518 ********** 2025-04-13 00:54:18.100406 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.100414 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.100422 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.100430 | orchestrator | 2025-04-13 00:54:18.100438 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-04-13 00:54:18.100446 | orchestrator | Sunday 13 April 2025 00:50:56 +0000 (0:00:00.299) 0:04:11.818 ********** 2025-04-13 00:54:18.100453 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.100461 | orchestrator | 2025-04-13 00:54:18.100469 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-04-13 00:54:18.100477 | orchestrator | Sunday 13 April 2025 00:50:58 +0000 (0:00:01.465) 0:04:13.283 ********** 2025-04-13 00:54:18.100485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 
'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-13 00:54:18.100495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-13 00:54:18.100508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-13 00:54:18.100648 | orchestrator | 2025-04-13 00:54:18.100668 | orchestrator | TASK [haproxy-config : Add configuration for memcached when 
using single external frontend] *** 2025-04-13 00:54:18.100681 | orchestrator | Sunday 13 April 2025 00:51:00 +0000 (0:00:01.694) 0:04:14.978 ********** 2025-04-13 00:54:18.100695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-13 00:54:18.100704 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.100713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-13 00:54:18.100721 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.100740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-13 00:54:18.100749 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.100757 | orchestrator | 2025-04-13 00:54:18.100765 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-04-13 00:54:18.100773 | orchestrator | Sunday 13 April 2025 00:51:00 +0000 (0:00:00.589) 0:04:15.567 ********** 2025-04-13 00:54:18.100781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-13 00:54:18.100790 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.100798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-13 00:54:18.100806 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.100814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-13 00:54:18.100828 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.100836 | orchestrator | 2025-04-13 00:54:18.100851 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-04-13 00:54:18.100860 | orchestrator | Sunday 13 April 2025 00:51:01 +0000 (0:00:00.803) 0:04:16.371 ********** 2025-04-13 00:54:18.100868 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.100876 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.100883 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.100891 | orchestrator | 2025-04-13 00:54:18.100899 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-04-13 00:54:18.100907 | orchestrator | Sunday 13 April 2025 00:51:02 +0000 (0:00:00.707) 0:04:17.078 ********** 2025-04-13 00:54:18.100915 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.100923 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.100931 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.100938 | orchestrator | 2025-04-13 00:54:18.100946 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-04-13 00:54:18.100954 | orchestrator | Sunday 13 April 2025 00:51:03 +0000 (0:00:01.817) 0:04:18.896 ********** 2025-04-13 00:54:18.100962 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.100970 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.100978 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.100985 | orchestrator | 2025-04-13 00:54:18.100994 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-04-13 00:54:18.101001 | orchestrator | Sunday 13 April 2025 00:51:04 +0000 (0:00:00.303) 
0:04:19.200 ********** 2025-04-13 00:54:18.101009 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.101017 | orchestrator | 2025-04-13 00:54:18.101025 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-04-13 00:54:18.101033 | orchestrator | Sunday 13 April 2025 00:51:05 +0000 (0:00:01.571) 0:04:20.771 ********** 2025-04-13 00:54:18.101041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 00:54:18.101050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 00:54:18.101094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 00:54:18.101156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 00:54:18.101190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 00:54:18.101223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.101240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 00:54:18.101248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 00:54:18.101279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 00:54:18.101292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 00:54:18.101312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': 
False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 00:54:18.101364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 00:54:18.101382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 00:54:18.101391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 00:54:18.101425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.101444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-04-13 00:54:18.101453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 00:54:18.101481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 00:54:18.101494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 00:54:18.101504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 00:54:18.101563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-04-13 00:54:18.101574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 00:54:18.101583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 00:54:18.101592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 00:54:18.101619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.101639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 00:54:18.101647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 00:54:18.101674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 00:54:18.101682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.101691 | orchestrator |
2025-04-13 00:54:18.101699 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-04-13 00:54:18.101707 | orchestrator | Sunday 13 April 2025 00:51:11 +0000 (0:00:05.256) 0:04:26.028 **********
2025-04-13 00:54:18.101719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server',
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 00:54:18.101728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 00:54:18.101777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 00:54:18.101794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 00:54:18.101802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 00:54:18.101840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.101849 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.101863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 00:54:18.101872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 00:54:18.101886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.101895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.101909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.101922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-13 00:54:18.101931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.101946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-13 00:54:18.101961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-04-13 00:54:18.101970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.101980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.101989 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.102001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 00:54:18.102011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 00:54:18.102052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.102066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 00:54:18.102081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.102091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:54:18.102100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 00:54:18.102136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.102146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-13 00:54:18.102184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-13 00:54:18.102193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.102205 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.102219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 00:54:18.102245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.102261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.102277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.102285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-04-13 00:54:18.102302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.102311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 00:54:18.102324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 00:54:18.102332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.102345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 00:54:18.102356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.102370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:54:18.102379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 00:54:18.102387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.102400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-13 00:54:18.102419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-13 00:54:18.102428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-13 00:54:18.102436 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.102444 | orchestrator |
2025-04-13 00:54:18.102452 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-04-13 00:54:18.102464 | orchestrator | Sunday 13 April 2025 00:51:13 +0000 (0:00:01.988) 0:04:28.016 **********
2025-04-13 00:54:18.102472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-04-13 00:54:18.102480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-04-13 00:54:18.102488 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.102499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-04-13 00:54:18.102507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-04-13 00:54:18.102515 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.102526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-04-13 00:54:18.102534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-04-13 00:54:18.102547 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.102555 | orchestrator |
2025-04-13 00:54:18.102563 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-04-13 00:54:18.102571 | orchestrator | Sunday 13 April 2025 00:51:15 +0000 (0:00:02.208) 0:04:30.225 **********
2025-04-13 00:54:18.102579 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.102586 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.102598 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.102606 | orchestrator |
2025-04-13 00:54:18.102614 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-04-13 00:54:18.102622 | orchestrator | Sunday 13 April 2025 00:51:16 +0000 (0:00:01.659) 0:04:31.885 **********
2025-04-13 00:54:18.102630 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.102638 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.102649 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.102662 | orchestrator |
2025-04-13 00:54:18.102674 | orchestrator | TASK [include_role : placement] ************************************************
2025-04-13 00:54:18.102686 | orchestrator | Sunday 13 April 2025 00:51:19 +0000 (0:00:02.469) 0:04:34.354 **********
2025-04-13 00:54:18.102699 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:54:18.102712 | orchestrator |
2025-04-13 00:54:18.102724 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-04-13 00:54:18.102736 | orchestrator | Sunday 13 April 2025 00:51:21 +0000 (0:00:01.607) 0:04:35.961 **********
2025-04-13 00:54:18.102749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-13 00:54:18.102761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-13 00:54:18.102773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-13 00:54:18.102791 | orchestrator |
2025-04-13 00:54:18.102803 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-04-13 00:54:18.102816 | orchestrator | Sunday 13 April 2025 00:51:24 +0000 (0:00:03.856) 0:04:39.818 **********
2025-04-13 00:54:18.102845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-13 00:54:18.102859 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.102871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-13 00:54:18.102883 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.102895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-04-13 00:54:18.102908 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.102921 | orchestrator |
2025-04-13 00:54:18.102934 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-04-13 00:54:18.102947 | orchestrator | Sunday 13 April 2025 00:51:25 +0000 (0:00:00.481) 0:04:40.299 **********
2025-04-13 00:54:18.102960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-13 00:54:18.102980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-13 00:54:18.102993 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.103005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-13 00:54:18.103018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-13 00:54:18.103031 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.103043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-13 00:54:18.103057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-04-13 00:54:18.103070 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.103083 | orchestrator |
2025-04-13 00:54:18.103094 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-04-13 00:54:18.103114 | orchestrator | Sunday 13 April 2025 00:51:26 +0000 (0:00:01.249) 0:04:41.548 **********
2025-04-13 00:54:18.103127 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.103139 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.103151 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.103219 | orchestrator |
2025-04-13 00:54:18.103235 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-04-13 00:54:18.103248 | orchestrator | Sunday 13
April 2025 00:51:28 +0000 (0:00:01.416) 0:04:42.965 ********** 2025-04-13 00:54:18.103261 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.103274 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.103287 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.103300 | orchestrator | 2025-04-13 00:54:18.103311 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-04-13 00:54:18.103319 | orchestrator | Sunday 13 April 2025 00:51:30 +0000 (0:00:02.138) 0:04:45.103 ********** 2025-04-13 00:54:18.103327 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.103335 | orchestrator | 2025-04-13 00:54:18.103343 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-04-13 00:54:18.103351 | orchestrator | Sunday 13 April 2025 00:51:31 +0000 (0:00:01.666) 0:04:46.769 ********** 2025-04-13 00:54:18.103360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.103389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.103398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.103413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.103422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.103430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.103445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.103458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}})  2025-04-13 00:54:18.103471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.103480 | orchestrator | 2025-04-13 00:54:18.103488 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-04-13 00:54:18.103496 | orchestrator | Sunday 13 April 2025 00:51:37 +0000 (0:00:05.486) 0:04:52.256 ********** 2025-04-13 00:54:18.103504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.103518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.103531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.103539 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.103548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.103561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.103570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.103586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.103599 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.103607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.103616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.103624 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.103632 | orchestrator | 2025-04-13 00:54:18.103640 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-04-13 00:54:18.103648 | orchestrator | Sunday 13 April 2025 00:51:38 +0000 (0:00:01.126) 0:04:53.382 ********** 2025-04-13 00:54:18.103656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-13 00:54:18.103668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-13 00:54:18.103677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-13 00:54:18.103686 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-13 00:54:18.103694 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.103702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-13 00:54:18.103710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-13 00:54:18.103721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-13 00:54:18.103728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-13 00:54:18.103735 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.103742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-13 00:54:18.103749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-13 00:54:18.103756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-13 00:54:18.103763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-13 00:54:18.103770 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.103777 | orchestrator | 2025-04-13 00:54:18.103784 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-04-13 00:54:18.103791 | orchestrator | Sunday 13 April 2025 00:51:39 +0000 (0:00:01.388) 0:04:54.770 ********** 2025-04-13 00:54:18.103798 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.103805 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.103812 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.103819 | orchestrator | 2025-04-13 00:54:18.103826 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-04-13 00:54:18.103833 | orchestrator | Sunday 13 April 2025 00:51:41 +0000 (0:00:01.453) 0:04:56.224 ********** 2025-04-13 00:54:18.103839 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.103846 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.103853 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.103860 | orchestrator | 2025-04-13 00:54:18.103867 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-04-13 00:54:18.103874 | orchestrator | Sunday 13 April 2025 00:51:43 +0000 (0:00:02.454) 0:04:58.679 ********** 2025-04-13 00:54:18.103881 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.103888 | orchestrator | 2025-04-13 00:54:18.103897 | orchestrator | TASK [nova-cell : Configure loadbalancer 
for nova-novncproxy] ****************** 2025-04-13 00:54:18.103905 | orchestrator | Sunday 13 April 2025 00:51:45 +0000 (0:00:01.696) 0:05:00.375 ********** 2025-04-13 00:54:18.103911 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-04-13 00:54:18.103919 | orchestrator | 2025-04-13 00:54:18.103926 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-04-13 00:54:18.103933 | orchestrator | Sunday 13 April 2025 00:51:46 +0000 (0:00:01.354) 0:05:01.730 ********** 2025-04-13 00:54:18.103943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-13 00:54:18.103957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-13 00:54:18.103964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-13 00:54:18.103971 | orchestrator | 2025-04-13 00:54:18.103979 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-04-13 00:54:18.103986 | orchestrator | Sunday 13 April 2025 00:51:52 +0000 (0:00:05.249) 0:05:06.980 ********** 2025-04-13 00:54:18.103993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-13 00:54:18.104000 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.104012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-13 00:54:18.104020 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.104027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-13 00:54:18.104034 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.104041 | orchestrator | 2025-04-13 00:54:18.104048 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-04-13 00:54:18.104055 | orchestrator | Sunday 13 April 2025 00:51:54 +0000 (0:00:02.026) 0:05:09.006 ********** 2025-04-13 00:54:18.104062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-13 00:54:18.104069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-13 00:54:18.104077 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.104108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-13 00:54:18.104119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-13 00:54:18.104127 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.104134 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-13 00:54:18.104141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-13 00:54:18.104148 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.104155 | orchestrator | 2025-04-13 00:54:18.104177 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-13 00:54:18.104187 | orchestrator | Sunday 13 April 2025 00:51:56 +0000 (0:00:02.036) 0:05:11.043 ********** 2025-04-13 00:54:18.104194 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.104201 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.104208 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.104215 | orchestrator | 2025-04-13 00:54:18.104222 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-13 00:54:18.104229 | orchestrator | Sunday 13 April 2025 00:51:59 +0000 (0:00:03.150) 0:05:14.193 ********** 2025-04-13 00:54:18.104236 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.104243 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.104249 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.104256 | orchestrator | 2025-04-13 00:54:18.104263 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-04-13 00:54:18.104270 | orchestrator | Sunday 13 April 2025 00:52:02 +0000 (0:00:03.673) 0:05:17.866 ********** 2025-04-13 00:54:18.104281 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml 
for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-04-13 00:54:18.104288 | orchestrator | 2025-04-13 00:54:18.104295 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-04-13 00:54:18.104302 | orchestrator | Sunday 13 April 2025 00:52:04 +0000 (0:00:01.285) 0:05:19.152 ********** 2025-04-13 00:54:18.104309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-13 00:54:18.104316 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.104323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-13 00:54:18.104331 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.104343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-13 00:54:18.104350 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.104357 | orchestrator | 2025-04-13 00:54:18.104364 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-04-13 00:54:18.104371 | orchestrator | Sunday 13 April 2025 00:52:05 +0000 (0:00:01.560) 0:05:20.713 ********** 2025-04-13 00:54:18.104382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-13 00:54:18.104389 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.104403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-13 00:54:18.104410 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.104418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-13 00:54:18.104425 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.104432 | orchestrator | 2025-04-13 00:54:18.104439 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-04-13 00:54:18.104446 | orchestrator | Sunday 13 April 2025 00:52:07 +0000 (0:00:01.797) 0:05:22.511 ********** 2025-04-13 00:54:18.104453 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.104460 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.104470 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.104477 | orchestrator | 2025-04-13 00:54:18.104484 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-13 00:54:18.104490 | orchestrator | Sunday 13 April 2025 00:52:09 +0000 (0:00:02.262) 0:05:24.773 ********** 2025-04-13 00:54:18.104498 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.104505 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.104512 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.104519 | orchestrator | 2025-04-13 00:54:18.104526 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-13 00:54:18.104533 | orchestrator | Sunday 13 April 2025 00:52:12 +0000 (0:00:02.679) 0:05:27.452 ********** 2025-04-13 00:54:18.104540 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.104547 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.104554 | orchestrator | ok: [testbed-node-2] 2025-04-13 
00:54:18.104576 | orchestrator | 2025-04-13 00:54:18.104583 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-04-13 00:54:18.104590 | orchestrator | Sunday 13 April 2025 00:52:15 +0000 (0:00:03.477) 0:05:30.929 ********** 2025-04-13 00:54:18.104597 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-04-13 00:54:18.104604 | orchestrator | 2025-04-13 00:54:18.104611 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-04-13 00:54:18.104618 | orchestrator | Sunday 13 April 2025 00:52:17 +0000 (0:00:01.529) 0:05:32.459 ********** 2025-04-13 00:54:18.104625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-13 00:54:18.104632 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.104640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-13 00:54:18.104647 | 
orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.104658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-13 00:54:18.104665 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.104672 | orchestrator | 2025-04-13 00:54:18.104679 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-04-13 00:54:18.104686 | orchestrator | Sunday 13 April 2025 00:52:19 +0000 (0:00:02.033) 0:05:34.493 ********** 2025-04-13 00:54:18.104694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-13 00:54:18.104701 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.104708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-13 00:54:18.104729 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.104742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-13 00:54:18.104750 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.104757 | orchestrator | 2025-04-13 00:54:18.104764 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-04-13 00:54:18.104771 | orchestrator | Sunday 13 April 2025 00:52:21 +0000 (0:00:01.540) 0:05:36.033 ********** 2025-04-13 00:54:18.104778 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.104785 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.104792 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.104799 | orchestrator | 2025-04-13 00:54:18.104806 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-13 00:54:18.104813 | orchestrator | Sunday 13 April 2025 00:52:23 +0000 (0:00:02.076) 0:05:38.109 ********** 2025-04-13 00:54:18.104820 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.104827 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.104834 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.104841 | orchestrator | 2025-04-13 00:54:18.104848 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-13 00:54:18.104858 | orchestrator | Sunday 13 April 2025 00:52:25 +0000 (0:00:02.817) 0:05:40.927 ********** 2025-04-13 00:54:18.104865 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.104872 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.104879 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.104886 | orchestrator | 2025-04-13 00:54:18.104893 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-04-13 00:54:18.104900 | orchestrator | Sunday 13 April 2025 00:52:29 +0000 (0:00:03.519) 0:05:44.446 ********** 2025-04-13 00:54:18.104907 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.104914 | orchestrator | 2025-04-13 00:54:18.104921 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-04-13 00:54:18.104928 | orchestrator | Sunday 13 April 2025 00:52:31 +0000 (0:00:01.698) 0:05:46.145 ********** 2025-04-13 00:54:18.104938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.104946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-13 00:54:18.104960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-13 00:54:18.104967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-13 00:54:18.104980 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.104987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.104995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-13 00:54:18.105005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-13 00:54:18 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:18.105121 | orchestrator | 2025-04-13 00:54:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:18.105137 | orchestrator | 2025-04-13 00:54:18 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:18.105155 | orchestrator | 2025-04-13 00:54:18 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:18.105209 | orchestrator | 2025-04-13 00:54:18.105222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-13 00:54:18.105230 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.105247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.105255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-13 00:54:18.105262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-13 00:54:18.105307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-13 00:54:18.105317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.105324 | orchestrator | 2025-04-13 00:54:18.105331 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-04-13 00:54:18.105338 | orchestrator | Sunday 13 April 2025 00:52:35 +0000 (0:00:04.389) 0:05:50.534 ********** 2025-04-13 00:54:18.105352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.105360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-13 00:54:18.105367 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-13 00:54:18.105374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-13 00:54:18.105399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.105407 | orchestrator | skipping: [testbed-node-0] 2025-04-13 
00:54:18.105414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.105427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-13 00:54:18.105435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-13 00:54:18.105442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-13 00:54:18.105449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.105461 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.105481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.105489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-13 00:54:18.105502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-13 00:54:18.105510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-13 00:54:18.105517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-13 00:54:18.105524 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.105531 | orchestrator | 2025-04-13 00:54:18.105538 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-04-13 00:54:18.105549 | orchestrator | Sunday 13 April 2025 00:52:36 +0000 (0:00:00.943) 0:05:51.477 ********** 2025-04-13 00:54:18.105557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-13 00:54:18.105564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-13 00:54:18.105571 | orchestrator | 
skipping: [testbed-node-0]
2025-04-13 00:54:18.105579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-04-13 00:54:18.105586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-04-13 00:54:18.105593 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.105613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-04-13 00:54:18.105621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-04-13 00:54:18.105628 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.105635 | orchestrator |
2025-04-13 00:54:18.105642 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-04-13 00:54:18.105649 | orchestrator | Sunday 13 April 2025 00:52:37 +0000 (0:00:01.366) 0:05:52.844 **********
2025-04-13 00:54:18.105656 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:54:18.105664 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:54:18.105670 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:54:18.105677 | orchestrator |
2025-04-13 00:54:18.105684 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-04-13 00:54:18.105691 | orchestrator | Sunday 13 April 2025 00:52:39 +0000 (0:00:01.493) 0:05:54.337 **********
2025-04-13 00:54:18.105698 |
orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.105705 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.105712 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.105719 | orchestrator | 2025-04-13 00:54:18.105728 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-04-13 00:54:18.105740 | orchestrator | Sunday 13 April 2025 00:52:41 +0000 (0:00:02.435) 0:05:56.773 ********** 2025-04-13 00:54:18.105774 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.105782 | orchestrator | 2025-04-13 00:54:18.105790 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-04-13 00:54:18.105798 | orchestrator | Sunday 13 April 2025 00:52:43 +0000 (0:00:01.766) 0:05:58.539 ********** 2025-04-13 00:54:18.105806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:54:18.105819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:54:18.105835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:54:18.105860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:54:18.105870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:54:18.105878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:54:18.105897 | orchestrator | 2025-04-13 00:54:18.105905 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-04-13 00:54:18.105912 | orchestrator | Sunday 13 April 2025 00:52:50 +0000 (0:00:06.439) 0:06:04.979 ********** 2025-04-13 00:54:18.105933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-13 00:54:18.105942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-13 00:54:18.105950 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.105958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-13 00:54:18.105979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-13 00:54:18.105987 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.105995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-13 00:54:18.106038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-13 00:54:18.106056 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.106064 | orchestrator | 2025-04-13 00:54:18.106072 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-04-13 00:54:18.106080 | orchestrator | Sunday 13 April 2025 00:52:50 +0000 (0:00:00.905) 0:06:05.884 ********** 2025-04-13 00:54:18.106088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-13 
00:54:18.106096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-13 00:54:18.106108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-13 00:54:18.106116 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.106123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-13 00:54:18.106130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-13 00:54:18.106137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-13 00:54:18.106144 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.106155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-13 00:54:18.106176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-13 00:54:18.106183 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-04-13 00:54:18.106191 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.106198 | orchestrator |
2025-04-13 00:54:18.106205 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-04-13 00:54:18.106212 | orchestrator | Sunday 13 April 2025 00:52:52 +0000 (0:00:01.383) 0:06:07.268 **********
2025-04-13 00:54:18.106219 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.106225 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.106232 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.106239 | orchestrator |
2025-04-13 00:54:18.106246 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-04-13 00:54:18.106253 | orchestrator | Sunday 13 April 2025 00:52:52 +0000 (0:00:00.456) 0:06:07.725 **********
2025-04-13 00:54:18.106260 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:54:18.106267 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:54:18.106274 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.106281 | orchestrator |
2025-04-13 00:54:18.106288 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-04-13 00:54:18.106295 | orchestrator | Sunday 13 April 2025 00:52:54 +0000 (0:00:01.679) 0:06:09.404 **********
2025-04-13 00:54:18.106318 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:54:18.106326 | orchestrator |
2025-04-13 00:54:18.106333 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-04-13 00:54:18.106340 | orchestrator | Sunday 13 April 2025
00:52:56 +0000 (0:00:01.845) 0:06:11.250 ********** 2025-04-13 00:54:18.106348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-13 00:54:18.106359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 00:54:18.106367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106374 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 00:54:18.106390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-13 00:54:18.106414 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 00:54:18.106422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 00:54:18.106455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-13 00:54:18.106462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 00:54:18.106470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 00:54:18.106515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-13 00:54:18.106523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 00:54:18.106530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 
'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 00:54:18.106565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-13 00:54:18.106590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 00:54:18.106597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 00:54:18.106626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-13 00:54:18.106647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 00:54:18.106654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 00:54:18.106691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 
00:54:18.106699 | orchestrator |
2025-04-13 00:54:18.106706 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-04-13 00:54:18.106713 | orchestrator | Sunday 13 April 2025 00:53:01 +0000 (0:00:04.807) 0:06:16.057 **********
2025-04-13 00:54:18.106720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 00:54:18.106727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-13 00:54:18.106735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 00:54:18.106766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 00:54:18.106777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 00:54:18.106784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106792 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 00:54:18.106806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106817 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.106832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-13 00:54:18.106840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 00:54:18.106847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 00:54:18.106874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 00:54:18.106889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 00:54:18.106897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 00:54:18.106919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106926 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.106938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-13 00:54:18.106950 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 00:54:18.106957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.106975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 00:54:18.106982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 00:54:18.106995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 00:54:18.107006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.107013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 00:54:18.107024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 00:54:18.107031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 
'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 00:54:18.107039 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:54:18.107106 | orchestrator |
2025-04-13 00:54:18.107114 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-04-13 00:54:18.107121 | orchestrator | Sunday 13 April 2025 00:53:02 +0000 (0:00:01.743) 0:06:17.801 **********
2025-04-13 00:54:18.107128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-04-13 00:54:18.107136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-04-13 00:54:18.107144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-04-13 00:54:18.107151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-04-13 00:54:18.107237 | orchestrator | skipping:
[testbed-node-0] 2025-04-13 00:54:18.107248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-13 00:54:18.107255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-13 00:54:18.107267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-13 00:54:18.107275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-13 00:54:18.107282 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.107303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-13 00:54:18.107315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-13 00:54:18.107325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-13 00:54:18.107336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-13 00:54:18.107343 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.107350 | orchestrator | 2025-04-13 00:54:18.107358 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-04-13 00:54:18.107365 | orchestrator | Sunday 13 April 2025 00:53:04 +0000 (0:00:01.682) 0:06:19.483 ********** 2025-04-13 00:54:18.107372 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.107378 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.107385 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.107392 | orchestrator | 2025-04-13 00:54:18.107399 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-04-13 00:54:18.107406 | orchestrator | Sunday 13 April 2025 00:53:05 +0000 (0:00:00.757) 0:06:20.240 ********** 2025-04-13 00:54:18.107413 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.107420 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.107427 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.107434 | orchestrator | 2025-04-13 00:54:18.107441 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-04-13 00:54:18.107448 | orchestrator | Sunday 13 April 2025 00:53:07 +0000 (0:00:01.805) 0:06:22.045 ********** 2025-04-13 00:54:18.107455 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.107462 | orchestrator | 2025-04-13 00:54:18.107469 | orchestrator | 
TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-04-13 00:54:18.107476 | orchestrator | Sunday 13 April 2025 00:53:09 +0000 (0:00:01.979) 0:06:24.025 ********** 2025-04-13 00:54:18.107484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-13 00:54:18.107496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-13 00:54:18.107507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-13 00:54:18.107514 | orchestrator | 2025-04-13 00:54:18.107522 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-04-13 00:54:18.107529 | orchestrator | Sunday 13 April 2025 00:53:12 +0000 (0:00:03.263) 0:06:27.288 ********** 2025-04-13 00:54:18.107536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-13 00:54:18.107543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-13 00:54:18.107570 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.107577 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.107585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-13 00:54:18.107592 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.107599 | orchestrator | 2025-04-13 00:54:18.107606 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-04-13 00:54:18.107613 | orchestrator | Sunday 13 April 2025 00:53:12 +0000 (0:00:00.397) 0:06:27.686 ********** 2025-04-13 00:54:18.107620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-13 00:54:18.107627 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.107634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-13 00:54:18.107642 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.107651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-13 00:54:18.107657 | orchestrator | skipping: 
[testbed-node-2] 2025-04-13 00:54:18.107663 | orchestrator | 2025-04-13 00:54:18.107670 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-04-13 00:54:18.107676 | orchestrator | Sunday 13 April 2025 00:53:13 +0000 (0:00:01.199) 0:06:28.886 ********** 2025-04-13 00:54:18.107682 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.107689 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.107695 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.107701 | orchestrator | 2025-04-13 00:54:18.107707 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-04-13 00:54:18.107713 | orchestrator | Sunday 13 April 2025 00:53:14 +0000 (0:00:00.466) 0:06:29.352 ********** 2025-04-13 00:54:18.107719 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.107725 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.107732 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.107738 | orchestrator | 2025-04-13 00:54:18.107747 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-04-13 00:54:18.107753 | orchestrator | Sunday 13 April 2025 00:53:16 +0000 (0:00:01.794) 0:06:31.147 ********** 2025-04-13 00:54:18.107760 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:54:18.107766 | orchestrator | 2025-04-13 00:54:18.107772 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-04-13 00:54:18.107778 | orchestrator | Sunday 13 April 2025 00:53:18 +0000 (0:00:01.928) 0:06:33.076 ********** 2025-04-13 00:54:18.107784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.107792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.107799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.107809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.107820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.107827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-13 00:54:18.107833 | orchestrator | 2025-04-13 00:54:18.107839 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-04-13 00:54:18.107846 | orchestrator | Sunday 13 April 2025 00:53:26 +0000 (0:00:07.948) 0:06:41.024 ********** 2025-04-13 00:54:18.107852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.107861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.107871 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.107877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.107884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.107890 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.107896 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.107906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-13 00:54:18.107916 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.107922 | 
orchestrator | 2025-04-13 00:54:18.107928 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-04-13 00:54:18.107935 | orchestrator | Sunday 13 April 2025 00:53:27 +0000 (0:00:00.952) 0:06:41.977 ********** 2025-04-13 00:54:18.107941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-13 00:54:18.107947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-13 00:54:18.107954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-13 00:54:18.107960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-13 00:54:18.107966 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.107973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-13 00:54:18.107979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-13 00:54:18.107985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-13 00:54:18.107991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-13 00:54:18.107997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-13 00:54:18.108004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-13 00:54:18.108010 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.108016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-13 00:54:18.108023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-13 00:54:18.108029 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.108035 | orchestrator | 2025-04-13 00:54:18.108041 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-04-13 00:54:18.108047 | orchestrator | Sunday 13 April 2025 00:53:28 +0000 (0:00:01.463) 0:06:43.441 ********** 2025-04-13 00:54:18.108057 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.108063 | orchestrator | changed: 
[testbed-node-1] 2025-04-13 00:54:18.108069 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.108076 | orchestrator | 2025-04-13 00:54:18.108082 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-04-13 00:54:18.108091 | orchestrator | Sunday 13 April 2025 00:53:30 +0000 (0:00:01.530) 0:06:44.971 ********** 2025-04-13 00:54:18.108097 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.108103 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.108109 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.108116 | orchestrator | 2025-04-13 00:54:18.108122 | orchestrator | TASK [include_role : swift] **************************************************** 2025-04-13 00:54:18.108128 | orchestrator | Sunday 13 April 2025 00:53:32 +0000 (0:00:02.591) 0:06:47.562 ********** 2025-04-13 00:54:18.108134 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.108140 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.108149 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.108155 | orchestrator | 2025-04-13 00:54:18.108174 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-04-13 00:54:18.108185 | orchestrator | Sunday 13 April 2025 00:53:33 +0000 (0:00:00.568) 0:06:48.131 ********** 2025-04-13 00:54:18.108196 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.108206 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.108217 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.108223 | orchestrator | 2025-04-13 00:54:18.108229 | orchestrator | TASK [include_role : trove] **************************************************** 2025-04-13 00:54:18.108236 | orchestrator | Sunday 13 April 2025 00:53:33 +0000 (0:00:00.337) 0:06:48.469 ********** 2025-04-13 00:54:18.108242 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.108248 | orchestrator | skipping: 
[testbed-node-1] 2025-04-13 00:54:18.108254 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.108260 | orchestrator | 2025-04-13 00:54:18.108266 | orchestrator | TASK [include_role : venus] **************************************************** 2025-04-13 00:54:18.108272 | orchestrator | Sunday 13 April 2025 00:53:34 +0000 (0:00:00.605) 0:06:49.075 ********** 2025-04-13 00:54:18.108278 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.108285 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.108291 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.108297 | orchestrator | 2025-04-13 00:54:18.108303 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-04-13 00:54:18.108309 | orchestrator | Sunday 13 April 2025 00:53:34 +0000 (0:00:00.600) 0:06:49.675 ********** 2025-04-13 00:54:18.108315 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.108321 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.108327 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.108333 | orchestrator | 2025-04-13 00:54:18.108339 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-04-13 00:54:18.108345 | orchestrator | Sunday 13 April 2025 00:53:35 +0000 (0:00:00.580) 0:06:50.256 ********** 2025-04-13 00:54:18.108351 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.108357 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.108363 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.108370 | orchestrator | 2025-04-13 00:54:18.108376 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-04-13 00:54:18.108382 | orchestrator | Sunday 13 April 2025 00:53:36 +0000 (0:00:00.793) 0:06:51.049 ********** 2025-04-13 00:54:18.108388 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.108395 | orchestrator | ok: 
[testbed-node-1] 2025-04-13 00:54:18.108401 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.108411 | orchestrator | 2025-04-13 00:54:18.108418 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-04-13 00:54:18.108424 | orchestrator | Sunday 13 April 2025 00:53:37 +0000 (0:00:00.942) 0:06:51.992 ********** 2025-04-13 00:54:18.108430 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.108441 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.108447 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.108453 | orchestrator | 2025-04-13 00:54:18.108460 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-04-13 00:54:18.108466 | orchestrator | Sunday 13 April 2025 00:53:37 +0000 (0:00:00.375) 0:06:52.367 ********** 2025-04-13 00:54:18.108472 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.108481 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.108490 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.108499 | orchestrator | 2025-04-13 00:54:18.108507 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-04-13 00:54:18.108517 | orchestrator | Sunday 13 April 2025 00:53:38 +0000 (0:00:01.310) 0:06:53.678 ********** 2025-04-13 00:54:18.108527 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.108536 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.108547 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.108554 | orchestrator | 2025-04-13 00:54:18.108561 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-04-13 00:54:18.108567 | orchestrator | Sunday 13 April 2025 00:53:40 +0000 (0:00:01.307) 0:06:54.986 ********** 2025-04-13 00:54:18.108573 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.108579 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.108585 | orchestrator | ok: 
[testbed-node-2] 2025-04-13 00:54:18.108592 | orchestrator | 2025-04-13 00:54:18.108598 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-04-13 00:54:18.108604 | orchestrator | Sunday 13 April 2025 00:53:41 +0000 (0:00:01.238) 0:06:56.225 ********** 2025-04-13 00:54:18.108610 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.108616 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.108623 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.108629 | orchestrator | 2025-04-13 00:54:18.108635 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-04-13 00:54:18.108641 | orchestrator | Sunday 13 April 2025 00:53:46 +0000 (0:00:05.036) 0:07:01.262 ********** 2025-04-13 00:54:18.108647 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.108653 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.108660 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.108666 | orchestrator | 2025-04-13 00:54:18.108672 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-04-13 00:54:18.108678 | orchestrator | Sunday 13 April 2025 00:53:49 +0000 (0:00:03.045) 0:07:04.307 ********** 2025-04-13 00:54:18.108684 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.108690 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.108696 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.108702 | orchestrator | 2025-04-13 00:54:18.108708 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-04-13 00:54:18.108714 | orchestrator | Sunday 13 April 2025 00:53:58 +0000 (0:00:08.702) 0:07:13.010 ********** 2025-04-13 00:54:18.108721 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.108727 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.108733 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.108739 | 
orchestrator | 2025-04-13 00:54:18.108745 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-04-13 00:54:18.108754 | orchestrator | Sunday 13 April 2025 00:54:00 +0000 (0:00:02.446) 0:07:15.456 ********** 2025-04-13 00:54:18.108760 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:54:18.108767 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:54:18.108773 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:54:18.108779 | orchestrator | 2025-04-13 00:54:18.108789 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-04-13 00:54:18.108795 | orchestrator | Sunday 13 April 2025 00:54:09 +0000 (0:00:09.359) 0:07:24.816 ********** 2025-04-13 00:54:18.108801 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.108807 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.108813 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.108827 | orchestrator | 2025-04-13 00:54:18.108833 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-04-13 00:54:18.108839 | orchestrator | Sunday 13 April 2025 00:54:10 +0000 (0:00:00.621) 0:07:25.437 ********** 2025-04-13 00:54:18.108846 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.108852 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.108858 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.108864 | orchestrator | 2025-04-13 00:54:18.108870 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-04-13 00:54:18.108876 | orchestrator | Sunday 13 April 2025 00:54:11 +0000 (0:00:00.636) 0:07:26.074 ********** 2025-04-13 00:54:18.108882 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.108889 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.108895 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.108901 | 
orchestrator | 2025-04-13 00:54:18.108907 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-04-13 00:54:18.108913 | orchestrator | Sunday 13 April 2025 00:54:11 +0000 (0:00:00.644) 0:07:26.719 ********** 2025-04-13 00:54:18.108919 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.108926 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.108932 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.108938 | orchestrator | 2025-04-13 00:54:18.108944 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-04-13 00:54:18.108950 | orchestrator | Sunday 13 April 2025 00:54:12 +0000 (0:00:00.347) 0:07:27.066 ********** 2025-04-13 00:54:18.108956 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.108963 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.108969 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.108975 | orchestrator | 2025-04-13 00:54:18.108981 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-04-13 00:54:18.108987 | orchestrator | Sunday 13 April 2025 00:54:12 +0000 (0:00:00.614) 0:07:27.681 ********** 2025-04-13 00:54:18.108993 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:54:18.109000 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:54:18.109006 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:54:18.109012 | orchestrator | 2025-04-13 00:54:18.109018 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-04-13 00:54:18.109024 | orchestrator | Sunday 13 April 2025 00:54:13 +0000 (0:00:00.704) 0:07:28.386 ********** 2025-04-13 00:54:18.109030 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.109036 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.109043 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.109049 | orchestrator | 
2025-04-13 00:54:18.109055 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-04-13 00:54:18.109061 | orchestrator | Sunday 13 April 2025 00:54:14 +0000 (0:00:01.149) 0:07:29.536 ********** 2025-04-13 00:54:18.109067 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:54:18.109073 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:54:18.109082 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:54:18.109089 | orchestrator | 2025-04-13 00:54:18.109095 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:54:18.109101 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-13 00:54:18.109108 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-13 00:54:18.109114 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-13 00:54:18.109120 | orchestrator | 2025-04-13 00:54:18.109126 | orchestrator | 2025-04-13 00:54:18.109132 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 00:54:18.109139 | orchestrator | Sunday 13 April 2025 00:54:15 +0000 (0:00:01.213) 0:07:30.749 ********** 2025-04-13 00:54:18.109149 | orchestrator | =============================================================================== 2025-04-13 00:54:18.109155 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.36s 2025-04-13 00:54:18.109173 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.70s 2025-04-13 00:54:18.109179 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.95s 2025-04-13 00:54:18.109185 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 7.84s 2025-04-13 00:54:18.109191 | 
orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.16s 2025-04-13 00:54:18.109198 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.44s 2025-04-13 00:54:18.109204 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 5.76s 2025-04-13 00:54:18.109210 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.60s 2025-04-13 00:54:18.109216 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.49s 2025-04-13 00:54:18.109222 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 5.31s 2025-04-13 00:54:18.109228 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 5.31s 2025-04-13 00:54:18.109239 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.26s 2025-04-13 00:54:18.109245 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.25s 2025-04-13 00:54:18.109252 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.23s 2025-04-13 00:54:18.109260 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.22s 2025-04-13 00:54:21.120522 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.04s 2025-04-13 00:54:21.120682 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.04s 2025-04-13 00:54:21.120703 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.81s 2025-04-13 00:54:21.120718 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.78s 2025-04-13 00:54:21.120732 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.72s 2025-04-13 00:54:21.120765 | 
orchestrator | 2025-04-13 00:54:21 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:21.121542 | orchestrator | 2025-04-13 00:54:21 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:21.123591 | orchestrator | 2025-04-13 00:54:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:21.124061 | orchestrator | 2025-04-13 00:54:21 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:24.160528 | orchestrator | 2025-04-13 00:54:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:24.160674 | orchestrator | 2025-04-13 00:54:24 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:24.161647 | orchestrator | 2025-04-13 00:54:24 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:24.166238 | orchestrator | 2025-04-13 00:54:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:24.167107 | orchestrator | 2025-04-13 00:54:24 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:27.207457 | orchestrator | 2025-04-13 00:54:24 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:27.207600 | orchestrator | 2025-04-13 00:54:27 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:27.211486 | orchestrator | 2025-04-13 00:54:27 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:27.212153 | orchestrator | 2025-04-13 00:54:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:27.213109 | orchestrator | 2025-04-13 00:54:27 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:30.258316 | orchestrator | 2025-04-13 00:54:27 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:30.258427 | orchestrator | 2025-04-13 
00:54:30 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:30.262061 | orchestrator | 2025-04-13 00:54:30 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:30.262442 | orchestrator | 2025-04-13 00:54:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:30.268740 | orchestrator | 2025-04-13 00:54:30 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:33.315698 | orchestrator | 2025-04-13 00:54:30 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:33.315817 | orchestrator | 2025-04-13 00:54:33 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:33.317033 | orchestrator | 2025-04-13 00:54:33 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:33.317059 | orchestrator | 2025-04-13 00:54:33 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:36.350689 | orchestrator | 2025-04-13 00:54:33 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:36.350818 | orchestrator | 2025-04-13 00:54:33 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:36.350855 | orchestrator | 2025-04-13 00:54:36 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:36.351435 | orchestrator | 2025-04-13 00:54:36 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:36.352918 | orchestrator | 2025-04-13 00:54:36 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:36.354730 | orchestrator | 2025-04-13 00:54:36 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:39.405544 | orchestrator | 2025-04-13 00:54:36 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:39.405689 | orchestrator | 2025-04-13 00:54:39 | INFO  | Task 
fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:39.406145 | orchestrator | 2025-04-13 00:54:39 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:39.409503 | orchestrator | 2025-04-13 00:54:39 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:39.412850 | orchestrator | 2025-04-13 00:54:39 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:42.449708 | orchestrator | 2025-04-13 00:54:39 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:42.449859 | orchestrator | 2025-04-13 00:54:42 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:42.450245 | orchestrator | 2025-04-13 00:54:42 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:42.453873 | orchestrator | 2025-04-13 00:54:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:42.454472 | orchestrator | 2025-04-13 00:54:42 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:42.454719 | orchestrator | 2025-04-13 00:54:42 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:45.499551 | orchestrator | 2025-04-13 00:54:45 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:45.501065 | orchestrator | 2025-04-13 00:54:45 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:45.502801 | orchestrator | 2025-04-13 00:54:45 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:45.504338 | orchestrator | 2025-04-13 00:54:45 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:48.550155 | orchestrator | 2025-04-13 00:54:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:48.550344 | orchestrator | 2025-04-13 00:54:48 | INFO  | Task 
fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:48.551033 | orchestrator | 2025-04-13 00:54:48 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:48.552491 | orchestrator | 2025-04-13 00:54:48 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:48.553568 | orchestrator | 2025-04-13 00:54:48 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:51.618571 | orchestrator | 2025-04-13 00:54:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:51.618712 | orchestrator | 2025-04-13 00:54:51 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:51.619464 | orchestrator | 2025-04-13 00:54:51 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:51.621201 | orchestrator | 2025-04-13 00:54:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:51.622729 | orchestrator | 2025-04-13 00:54:51 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:54.667072 | orchestrator | 2025-04-13 00:54:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:54.667332 | orchestrator | 2025-04-13 00:54:54 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:54.669067 | orchestrator | 2025-04-13 00:54:54 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:54.670143 | orchestrator | 2025-04-13 00:54:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:54.671295 | orchestrator | 2025-04-13 00:54:54 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:54:54.671626 | orchestrator | 2025-04-13 00:54:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:54:57.713716 | orchestrator | 2025-04-13 00:54:57 | INFO  | Task 
fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:54:57.714499 | orchestrator | 2025-04-13 00:54:57 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:54:57.715558 | orchestrator | 2025-04-13 00:54:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:54:57.716488 | orchestrator | 2025-04-13 00:54:57 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:00.760799 | orchestrator | 2025-04-13 00:54:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:00.760915 | orchestrator | 2025-04-13 00:55:00 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:00.761409 | orchestrator | 2025-04-13 00:55:00 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:03.799708 | orchestrator | 2025-04-13 00:55:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:03.799825 | orchestrator | 2025-04-13 00:55:00 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:03.799867 | orchestrator | 2025-04-13 00:55:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:03.799898 | orchestrator | 2025-04-13 00:55:03 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:03.800822 | orchestrator | 2025-04-13 00:55:03 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:03.802385 | orchestrator | 2025-04-13 00:55:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:03.803071 | orchestrator | 2025-04-13 00:55:03 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:06.852527 | orchestrator | 2025-04-13 00:55:03 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:06.852696 | orchestrator | 2025-04-13 00:55:06 | INFO  | Task 
fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:06.852760 | orchestrator | 2025-04-13 00:55:06 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:06.852770 | orchestrator | 2025-04-13 00:55:06 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:06.852792 | orchestrator | 2025-04-13 00:55:06 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:06.852803 | orchestrator | 2025-04-13 00:55:06 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:09.905584 | orchestrator | 2025-04-13 00:55:09 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:09.908969 | orchestrator | 2025-04-13 00:55:09 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:09.909306 | orchestrator | 2025-04-13 00:55:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:09.910196 | orchestrator | 2025-04-13 00:55:09 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:12.964109 | orchestrator | 2025-04-13 00:55:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:12.964298 | orchestrator | 2025-04-13 00:55:12 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:12.965245 | orchestrator | 2025-04-13 00:55:12 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:12.967788 | orchestrator | 2025-04-13 00:55:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:12.969038 | orchestrator | 2025-04-13 00:55:12 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:12.969288 | orchestrator | 2025-04-13 00:55:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:16.036706 | orchestrator | 2025-04-13 00:55:16 | INFO  | Task 
fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:16.037096 | orchestrator | 2025-04-13 00:55:16 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:16.037662 | orchestrator | 2025-04-13 00:55:16 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:16.040384 | orchestrator | 2025-04-13 00:55:16 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:19.085704 | orchestrator | 2025-04-13 00:55:16 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:19.085968 | orchestrator | 2025-04-13 00:55:19 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:19.086902 | orchestrator | 2025-04-13 00:55:19 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:19.089032 | orchestrator | 2025-04-13 00:55:19 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:19.089494 | orchestrator | 2025-04-13 00:55:19 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:22.131134 | orchestrator | 2025-04-13 00:55:19 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:22.131319 | orchestrator | 2025-04-13 00:55:22 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:22.133382 | orchestrator | 2025-04-13 00:55:22 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:22.134938 | orchestrator | 2025-04-13 00:55:22 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:22.137928 | orchestrator | 2025-04-13 00:55:22 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:25.193752 | orchestrator | 2025-04-13 00:55:22 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:25.193862 | orchestrator | 2025-04-13 00:55:25 | INFO  | Task 
fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:25.194105 | orchestrator | 2025-04-13 00:55:25 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:25.194143 | orchestrator | 2025-04-13 00:55:25 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:25.195178 | orchestrator | 2025-04-13 00:55:25 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:28.252654 | orchestrator | 2025-04-13 00:55:25 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:28.252813 | orchestrator | 2025-04-13 00:55:28 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:28.254106 | orchestrator | 2025-04-13 00:55:28 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:28.255640 | orchestrator | 2025-04-13 00:55:28 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:28.258700 | orchestrator | 2025-04-13 00:55:28 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:31.310382 | orchestrator | 2025-04-13 00:55:28 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:31.310646 | orchestrator | 2025-04-13 00:55:31 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:31.311721 | orchestrator | 2025-04-13 00:55:31 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:31.311769 | orchestrator | 2025-04-13 00:55:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:31.312629 | orchestrator | 2025-04-13 00:55:31 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:34.373566 | orchestrator | 2025-04-13 00:55:31 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:34.373826 | orchestrator | 2025-04-13 00:55:34 | INFO  | Task 
fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:37.419885 | orchestrator | 2025-04-13 00:55:34 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:37.421675 | orchestrator | 2025-04-13 00:55:34 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:37.421701 | orchestrator | 2025-04-13 00:55:34 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:37.421717 | orchestrator | 2025-04-13 00:55:34 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:37.421764 | orchestrator | 2025-04-13 00:55:37 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:37.422396 | orchestrator | 2025-04-13 00:55:37 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:37.422426 | orchestrator | 2025-04-13 00:55:37 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:37.422448 | orchestrator | 2025-04-13 00:55:37 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:40.475778 | orchestrator | 2025-04-13 00:55:37 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:40.475919 | orchestrator | 2025-04-13 00:55:40 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:40.476313 | orchestrator | 2025-04-13 00:55:40 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:40.477147 | orchestrator | 2025-04-13 00:55:40 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:40.478355 | orchestrator | 2025-04-13 00:55:40 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:43.532988 | orchestrator | 2025-04-13 00:55:40 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:43.533128 | orchestrator | 2025-04-13 00:55:43 | INFO  | Task 
fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:43.534256 | orchestrator | 2025-04-13 00:55:43 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:43.536279 | orchestrator | 2025-04-13 00:55:43 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:43.537719 | orchestrator | 2025-04-13 00:55:43 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:43.537873 | orchestrator | 2025-04-13 00:55:43 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:46.572214 | orchestrator | 2025-04-13 00:55:46 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:46.572479 | orchestrator | 2025-04-13 00:55:46 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:46.573427 | orchestrator | 2025-04-13 00:55:46 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:46.574203 | orchestrator | 2025-04-13 00:55:46 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:49.620877 | orchestrator | 2025-04-13 00:55:46 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:49.622133 | orchestrator | 2025-04-13 00:55:49 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:55:49.623317 | orchestrator | 2025-04-13 00:55:49 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED 2025-04-13 00:55:49.623364 | orchestrator | 2025-04-13 00:55:49 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:55:49.624557 | orchestrator | 2025-04-13 00:55:49 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:55:52.689392 | orchestrator | 2025-04-13 00:55:49 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:55:52.692506 | orchestrator | 2025-04-13 00:55:52 | INFO  | Task 
fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:55:55.746738 | orchestrator | 2025-04-13 00:55:52 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED
2025-04-13 00:55:55.746860 | orchestrator | 2025-04-13 00:55:52 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:55:55.746878 | orchestrator | 2025-04-13 00:55:52 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:55:55.746922 | orchestrator | 2025-04-13 00:55:52 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:55:55.746955 | orchestrator | 2025-04-13 00:55:55 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:55:55.750687 | orchestrator | 2025-04-13 00:55:55 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED
2025-04-13 00:55:55.751403 | orchestrator | 2025-04-13 00:55:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:55:55.753270 | orchestrator | 2025-04-13 00:55:55 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:55:58.799675 | orchestrator | 2025-04-13 00:55:55 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:55:58.799811 | orchestrator | 2025-04-13 00:55:58 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:55:58.803995 | orchestrator | 2025-04-13 00:55:58 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED
2025-04-13 00:55:58.804353 | orchestrator | 2025-04-13 00:55:58 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:55:58.804383 | orchestrator | 2025-04-13 00:55:58 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:55:58.804700 | orchestrator | 2025-04-13 00:55:58 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:56:01.847829 | orchestrator | 2025-04-13 00:56:01 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:56:01.848768 | orchestrator | 2025-04-13 00:56:01 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED
2025-04-13 00:56:01.851268 | orchestrator | 2025-04-13 00:56:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:56:01.852032 | orchestrator | 2025-04-13 00:56:01 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:56:04.898916 | orchestrator | 2025-04-13 00:56:01 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:56:04.899043 | orchestrator | 2025-04-13 00:56:04 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:56:04.900734 | orchestrator | 2025-04-13 00:56:04 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED
2025-04-13 00:56:04.902586 | orchestrator | 2025-04-13 00:56:04 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:56:04.903941 | orchestrator | 2025-04-13 00:56:04 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:56:07.955466 | orchestrator | 2025-04-13 00:56:04 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:56:07.955622 | orchestrator | 2025-04-13 00:56:07 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:56:07.958217 | orchestrator | 2025-04-13 00:56:07 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED
2025-04-13 00:56:07.961278 | orchestrator | 2025-04-13 00:56:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:56:07.963565 | orchestrator | 2025-04-13 00:56:07 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:56:11.016504 | orchestrator | 2025-04-13 00:56:07 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:56:11.016636 | orchestrator | 2025-04-13 00:56:11 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:56:11.018130 | orchestrator | 2025-04-13 00:56:11 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED
2025-04-13 00:56:11.021074 | orchestrator | 2025-04-13 00:56:11 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:56:11.023914 | orchestrator | 2025-04-13 00:56:11 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:56:14.072309 | orchestrator | 2025-04-13 00:56:11 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:56:14.072455 | orchestrator | 2025-04-13 00:56:14 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:56:14.074492 | orchestrator | 2025-04-13 00:56:14 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED
2025-04-13 00:56:14.077067 | orchestrator | 2025-04-13 00:56:14 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:56:14.078783 | orchestrator | 2025-04-13 00:56:14 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:56:17.131988 | orchestrator | 2025-04-13 00:56:14 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:56:17.132131 | orchestrator | 2025-04-13 00:56:17 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:56:17.133856 | orchestrator | 2025-04-13 00:56:17 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED
2025-04-13 00:56:17.136009 | orchestrator | 2025-04-13 00:56:17 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:56:17.138089 | orchestrator | 2025-04-13 00:56:17 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:56:20.178599 | orchestrator | 2025-04-13 00:56:17 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:56:20.178750 | orchestrator | 2025-04-13 00:56:20 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:56:20.179491 | orchestrator | 2025-04-13 00:56:20 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED
2025-04-13 00:56:20.182144 | orchestrator | 2025-04-13 00:56:20 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:56:20.184015 | orchestrator | 2025-04-13 00:56:20 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:56:23.238514 | orchestrator | 2025-04-13 00:56:20 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:56:23.238655 | orchestrator | 2025-04-13 00:56:23 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:56:23.239976 | orchestrator | 2025-04-13 00:56:23 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state STARTED
2025-04-13 00:56:23.241685 | orchestrator | 2025-04-13 00:56:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:56:23.243415 | orchestrator | 2025-04-13 00:56:23 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:56:23.246115 | orchestrator | 2025-04-13 00:56:23 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:56:26.293516 | orchestrator | 2025-04-13 00:56:26 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:56:26.299090 | orchestrator |
2025-04-13 00:56:26.299204 | orchestrator |
2025-04-13 00:56:26.299225 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-13 00:56:26.299241 | orchestrator |
2025-04-13 00:56:26.299255 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-13 00:56:26.299270 | orchestrator | Sunday 13 April 2025 00:54:19 +0000 (0:00:00.317) 0:00:00.317 **********
2025-04-13 00:56:26.299284 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:56:26.299299 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:56:26.299313 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:56:26.299327 | orchestrator |
2025-04-13 00:56:26.299341 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-13 00:56:26.299381 | orchestrator | Sunday 13 April 2025 00:54:20 +0000 (0:00:00.398) 0:00:00.715 **********
2025-04-13 00:56:26.299395 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-04-13 00:56:26.299410 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-04-13 00:56:26.299423 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-04-13 00:56:26.299437 | orchestrator |
2025-04-13 00:56:26.299450 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-04-13 00:56:26.299464 | orchestrator |
2025-04-13 00:56:26.299478 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-04-13 00:56:26.299491 | orchestrator | Sunday 13 April 2025 00:54:20 +0000 (0:00:00.286) 0:00:01.002 **********
2025-04-13 00:56:26.299555 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:56:26.299584 | orchestrator |
2025-04-13 00:56:26.299607 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-04-13 00:56:26.299627 | orchestrator | Sunday 13 April 2025 00:54:21 +0000 (0:00:00.699) 0:00:01.701 **********
2025-04-13 00:56:26.299650 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-13 00:56:26.299673 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-13 00:56:26.299697 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-04-13 00:56:26.299720 | orchestrator |
2025-04-13 00:56:26.299739 | orchestrator |
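Note on the long runs of "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" earlier in this log: they come from the OSISM client polling the state of remote (Celery-style) tasks once per second until each one finishes. A minimal sketch of such a poll loop follows; the `wait_for_tasks` helper, the `get_state` callback, and the fake state function are hypothetical illustrations, not OSISM's actual API.

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll each task's state until none is still running.

    Hypothetical helper mirroring the log's behavior: print one line per
    task per round, then wait `interval` seconds before the next check.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Demo with a fake state function: each task reports STARTED once,
# then SUCCESS on its second poll (interval=0 keeps the demo fast).
calls = {}
def fake_state(task_id):
    calls[task_id] = calls.get(task_id, 0) + 1
    return "SUCCESS" if calls[task_id] >= 2 else "STARTED"

wait_for_tasks(fake_state, ["fcda8527", "83119783"], interval=0)
```

In the real job the loop runs for minutes because the tasks being watched are full Kolla deployment plays, so each task stays in STARTED for many polling rounds.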
TASK [opensearch : Ensuring config directories exist] ************************** 2025-04-13 00:56:26.299753 | orchestrator | Sunday 13 April 2025 00:54:21 +0000 (0:00:00.781) 0:00:02.483 ********** 2025-04-13 00:56:26.299789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:56:26.299810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:56:26.299856 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:56:26.299885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:56:26.299902 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:56:26.299918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:56:26.299932 | orchestrator | 2025-04-13 00:56:26.299946 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-13 00:56:26.299960 | orchestrator | Sunday 13 April 2025 00:54:23 +0000 (0:00:01.595) 0:00:04.079 ********** 2025-04-13 00:56:26.299974 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:56:26.299988 | orchestrator | 2025-04-13 00:56:26.300008 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-04-13 00:56:26.300023 | orchestrator | Sunday 13 April 2025 00:54:24 +0000 (0:00:00.749) 0:00:04.828 ********** 2025-04-13 00:56:26.300046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:56:26.300062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:56:26.300077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:56:26.301573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:56:26.301658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:56:26.301689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:56:26.301703 | orchestrator | 2025-04-13 00:56:26.301715 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-04-13 00:56:26.301729 | orchestrator | Sunday 13 April 2025 00:54:27 +0000 (0:00:03.356) 0:00:08.184 ********** 2025-04-13 00:56:26.301742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-13 00:56:26.301756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-13 00:56:26.301776 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:56:26.301798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-13 00:56:26.301812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-13 00:56:26.301826 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:56:26.301839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-13 00:56:26.301852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-13 00:56:26.301872 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:56:26.301884 | orchestrator | 2025-04-13 00:56:26.301897 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-04-13 00:56:26.301919 | orchestrator | Sunday 13 April 2025 00:54:28 +0000 (0:00:01.234) 0:00:09.419 ********** 2025-04-13 00:56:26.301938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-13 00:56:26.301952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-13 00:56:26.301965 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:56:26.301978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-13 00:56:26.301992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-13 00:56:26.303404 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:56:26.303443 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-13 00:56:26.303458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-13 00:56:26.303472 | orchestrator | skipping: 
[testbed-node-2] 2025-04-13 00:56:26.303485 | orchestrator | 2025-04-13 00:56:26.303498 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-04-13 00:56:26.303510 | orchestrator | Sunday 13 April 2025 00:54:30 +0000 (0:00:01.234) 0:00:10.654 ********** 2025-04-13 00:56:26.303523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:56:26.303536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:56:26.303557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:56:26.303578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:56:26.303592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:56:26.303606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:56:26.303625 | orchestrator | 2025-04-13 00:56:26.303637 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-04-13 00:56:26.303650 | orchestrator | Sunday 13 April 2025 00:54:32 +0000 (0:00:02.525) 0:00:13.180 ********** 2025-04-13 00:56:26.303662 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:56:26.303675 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:56:26.303687 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:56:26.303699 | orchestrator | 2025-04-13 00:56:26.303712 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-04-13 00:56:26.303724 | orchestrator | Sunday 13 April 2025 00:54:36 +0000 (0:00:03.519) 0:00:16.699 ********** 2025-04-13 00:56:26.303736 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:56:26.303748 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:56:26.303760 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:56:26.303772 | orchestrator | 2025-04-13 00:56:26.303785 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-04-13 00:56:26.303797 | orchestrator | Sunday 13 April 2025 00:54:37 +0000 (0:00:01.864) 0:00:18.564 ********** 2025-04-13 00:56:26.303832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-04-13 00:56:26.303849 | orchestrator | 2025-04-13 00:56:26 | INFO  | Task 83119783-86c6-4abf-a9d3-4ffab5e51d46 is in state SUCCESS
2025-04-13 00:56:26.303864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:56:26.303877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-13 00:56:26.303897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:56:26.303919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:56:26.303933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-13 00:56:26.303945 | orchestrator | 2025-04-13 00:56:26.303958 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-13 00:56:26.303970 | orchestrator | Sunday 13 April 2025 00:54:40 +0000 (0:00:02.870) 0:00:21.434 ********** 2025-04-13 
00:56:26.303983 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:56:26.303995 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:56:26.304008 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:56:26.304026 | orchestrator | 2025-04-13 00:56:26.304038 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-13 00:56:26.304051 | orchestrator | Sunday 13 April 2025 00:54:41 +0000 (0:00:00.404) 0:00:21.839 ********** 2025-04-13 00:56:26.304063 | orchestrator | 2025-04-13 00:56:26.304075 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-13 00:56:26.304087 | orchestrator | Sunday 13 April 2025 00:54:41 +0000 (0:00:00.353) 0:00:22.192 ********** 2025-04-13 00:56:26.304099 | orchestrator | 2025-04-13 00:56:26.304112 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-13 00:56:26.304124 | orchestrator | Sunday 13 April 2025 00:54:41 +0000 (0:00:00.123) 0:00:22.316 ********** 2025-04-13 00:56:26.304136 | orchestrator | 2025-04-13 00:56:26.304231 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-04-13 00:56:26.304255 | orchestrator | Sunday 13 April 2025 00:54:41 +0000 (0:00:00.127) 0:00:22.444 ********** 2025-04-13 00:56:26.304274 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:56:26.304295 | orchestrator | 2025-04-13 00:56:26.304317 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-04-13 00:56:26.304338 | orchestrator | Sunday 13 April 2025 00:54:42 +0000 (0:00:00.428) 0:00:22.872 ********** 2025-04-13 00:56:26.304358 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:56:26.304372 | orchestrator | 2025-04-13 00:56:26.304385 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-04-13 00:56:26.304397 | orchestrator | Sunday 13 
April 2025 00:54:42 +0000 (0:00:00.519) 0:00:23.392 ********** 2025-04-13 00:56:26.304409 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:56:26.304421 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:56:26.304434 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:56:26.304446 | orchestrator | 2025-04-13 00:56:26.304458 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-04-13 00:56:26.304470 | orchestrator | Sunday 13 April 2025 00:55:18 +0000 (0:00:35.357) 0:00:58.749 ********** 2025-04-13 00:56:26.304482 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:56:26.304494 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:56:26.304506 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:56:26.304518 | orchestrator | 2025-04-13 00:56:26.304530 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-13 00:56:26.304542 | orchestrator | Sunday 13 April 2025 00:56:13 +0000 (0:00:55.283) 0:01:54.033 ********** 2025-04-13 00:56:26.304554 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:56:26.304566 | orchestrator | 2025-04-13 00:56:26.304578 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-04-13 00:56:26.304591 | orchestrator | Sunday 13 April 2025 00:56:14 +0000 (0:00:00.748) 0:01:54.782 ********** 2025-04-13 00:56:26.304603 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:56:26.304615 | orchestrator | 2025-04-13 00:56:26.304627 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-04-13 00:56:26.304639 | orchestrator | Sunday 13 April 2025 00:56:16 +0000 (0:00:02.630) 0:01:57.413 ********** 2025-04-13 00:56:26.304651 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:56:26.304663 | orchestrator | 2025-04-13 00:56:26.304675 | orchestrator | 
TASK [opensearch : Create new log retention policy] **************************** 2025-04-13 00:56:26.304696 | orchestrator | Sunday 13 April 2025 00:56:19 +0000 (0:00:02.726) 0:02:00.139 ********** 2025-04-13 00:56:26.304708 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:56:26.304720 | orchestrator | 2025-04-13 00:56:26.304732 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-04-13 00:56:26.304749 | orchestrator | Sunday 13 April 2025 00:56:22 +0000 (0:00:03.002) 0:02:03.142 ********** 2025-04-13 00:56:29.339345 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:56:29.339467 | orchestrator | 2025-04-13 00:56:29.339488 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:56:29.339535 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-13 00:56:29.339551 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-13 00:56:29.339565 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-13 00:56:29.339579 | orchestrator | 2025-04-13 00:56:29.339593 | orchestrator | 2025-04-13 00:56:29.339606 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 00:56:29.339620 | orchestrator | Sunday 13 April 2025 00:56:25 +0000 (0:00:03.001) 0:02:06.144 ********** 2025-04-13 00:56:29.339634 | orchestrator | =============================================================================== 2025-04-13 00:56:29.339648 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 55.28s 2025-04-13 00:56:29.339662 | orchestrator | opensearch : Restart opensearch container ------------------------------ 35.36s 2025-04-13 00:56:29.339675 | orchestrator | opensearch : Copying over opensearch service config file 
---------------- 3.52s 2025-04-13 00:56:29.339689 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.36s 2025-04-13 00:56:29.339702 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.00s 2025-04-13 00:56:29.339716 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.00s 2025-04-13 00:56:29.339729 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.87s 2025-04-13 00:56:29.339743 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.73s 2025-04-13 00:56:29.339756 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.63s 2025-04-13 00:56:29.339769 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.53s 2025-04-13 00:56:29.339783 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.86s 2025-04-13 00:56:29.339797 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.60s 2025-04-13 00:56:29.339811 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.23s 2025-04-13 00:56:29.339828 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.23s 2025-04-13 00:56:29.339844 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.78s 2025-04-13 00:56:29.339869 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.75s 2025-04-13 00:56:29.339894 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.75s 2025-04-13 00:56:29.339916 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.70s 2025-04-13 00:56:29.339940 | orchestrator | opensearch : Flush handlers 
--------------------------------------------- 0.60s 2025-04-13 00:56:29.339964 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.52s 2025-04-13 00:56:29.339989 | orchestrator | 2025-04-13 00:56:26 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:56:29.340015 | orchestrator | 2025-04-13 00:56:26 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:56:29.340038 | orchestrator | 2025-04-13 00:56:26 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:56:29.340072 | orchestrator | 2025-04-13 00:56:29 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:56:29.342292 | orchestrator | 2025-04-13 00:56:29 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:56:32.387940 | orchestrator | 2025-04-13 00:56:29 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:56:32.388065 | orchestrator | 2025-04-13 00:56:29 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:56:32.388141 | orchestrator | 2025-04-13 00:56:32 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:56:32.392595 | orchestrator | 2025-04-13 00:56:32 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:56:32.392652 | orchestrator | 2025-04-13 00:56:32 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:56:35.448476 | orchestrator | 2025-04-13 00:56:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:56:35.448617 | orchestrator | 2025-04-13 00:56:35 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:56:35.452236 | orchestrator | 2025-04-13 00:56:35 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:56:38.504312 | orchestrator | 2025-04-13 00:56:35 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state 
STARTED 2025-04-13 00:56:38.504420 | orchestrator | 2025-04-13 00:56:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:56:38.504448 | orchestrator | 2025-04-13 00:56:38 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:56:38.506405 | orchestrator | 2025-04-13 00:56:38 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:56:38.508180 | orchestrator | 2025-04-13 00:56:38 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:56:41.547589 | orchestrator | 2025-04-13 00:56:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:56:41.547723 | orchestrator | 2025-04-13 00:56:41 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:56:41.549275 | orchestrator | 2025-04-13 00:56:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:56:41.550855 | orchestrator | 2025-04-13 00:56:41 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:56:41.551170 | orchestrator | 2025-04-13 00:56:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:56:44.601983 | orchestrator | 2025-04-13 00:56:44 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:56:44.603923 | orchestrator | 2025-04-13 00:56:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:56:44.605663 | orchestrator | 2025-04-13 00:56:44 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:56:44.605874 | orchestrator | 2025-04-13 00:56:44 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:56:47.648960 | orchestrator | 2025-04-13 00:56:47 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:56:47.650649 | orchestrator | 2025-04-13 00:56:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:56:47.652385 | orchestrator | 
2025-04-13 00:56:47 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:56:50.699878 | orchestrator | 2025-04-13 00:56:47 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:56:50.700020 | orchestrator | 2025-04-13 00:56:50 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:56:50.701325 | orchestrator | 2025-04-13 00:56:50 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:56:50.704670 | orchestrator | 2025-04-13 00:56:50 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:56:53.750997 | orchestrator | 2025-04-13 00:56:50 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:56:53.751142 | orchestrator | 2025-04-13 00:56:53 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:56:53.752242 | orchestrator | 2025-04-13 00:56:53 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:56:53.754257 | orchestrator | 2025-04-13 00:56:53 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:56:56.806299 | orchestrator | 2025-04-13 00:56:53 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:56:56.806447 | orchestrator | 2025-04-13 00:56:56 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:56:56.808451 | orchestrator | 2025-04-13 00:56:56 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:56:56.810316 | orchestrator | 2025-04-13 00:56:56 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:56:56.810651 | orchestrator | 2025-04-13 00:56:56 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:56:59.855362 | orchestrator | 2025-04-13 00:56:59 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:56:59.856999 | orchestrator | 2025-04-13 00:56:59 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:56:59.858588 | orchestrator | 2025-04-13 00:56:59 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:57:02.908662 | orchestrator | 2025-04-13 00:56:59 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:02.908805 | orchestrator | 2025-04-13 00:57:02 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:57:02.910725 | orchestrator | 2025-04-13 00:57:02 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:02.912912 | orchestrator | 2025-04-13 00:57:02 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:57:05.966121 | orchestrator | 2025-04-13 00:57:02 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:05.966343 | orchestrator | 2025-04-13 00:57:05 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:57:05.966900 | orchestrator | 2025-04-13 00:57:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:05.966939 | orchestrator | 2025-04-13 00:57:05 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:57:09.023609 | orchestrator | 2025-04-13 00:57:05 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:09.023743 | orchestrator | 2025-04-13 00:57:09 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:57:09.024986 | orchestrator | 2025-04-13 00:57:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:09.026571 | orchestrator | 2025-04-13 00:57:09 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:57:09.026917 | orchestrator | 2025-04-13 00:57:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:12.074217 | orchestrator | 2025-04-13 00:57:12 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state 
STARTED 2025-04-13 00:57:12.076119 | orchestrator | 2025-04-13 00:57:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:12.076650 | orchestrator | 2025-04-13 00:57:12 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:57:12.076861 | orchestrator | 2025-04-13 00:57:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:15.131533 | orchestrator | 2025-04-13 00:57:15 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:57:15.134426 | orchestrator | 2025-04-13 00:57:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:15.136383 | orchestrator | 2025-04-13 00:57:15 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:57:18.191027 | orchestrator | 2025-04-13 00:57:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:18.191136 | orchestrator | 2025-04-13 00:57:18 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:57:18.193579 | orchestrator | 2025-04-13 00:57:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:18.195272 | orchestrator | 2025-04-13 00:57:18 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:57:21.244453 | orchestrator | 2025-04-13 00:57:18 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:21.244598 | orchestrator | 2025-04-13 00:57:21 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED 2025-04-13 00:57:21.245776 | orchestrator | 2025-04-13 00:57:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:21.247625 | orchestrator | 2025-04-13 00:57:21 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED 2025-04-13 00:57:24.298722 | orchestrator | 2025-04-13 00:57:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:24.298877 | orchestrator | 
2025-04-13 00:57:24 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:57:24.299646 | orchestrator | 2025-04-13 00:57:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:57:24.301673 | orchestrator | 2025-04-13 00:57:24 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state STARTED
2025-04-13 00:57:27.359575 | orchestrator | 2025-04-13 00:57:24 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:57:27.359712 | orchestrator | 2025-04-13 00:57:27 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:57:27.360749 | orchestrator | 2025-04-13 00:57:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:57:27.366911 | orchestrator | 2025-04-13 00:57:27 | INFO  | Task 5c95b36a-37ce-459a-9a36-1d0778c39b99 is in state SUCCESS
2025-04-13 00:57:27.369511 | orchestrator |
2025-04-13 00:57:27.369569 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-04-13 00:57:27.369586 | orchestrator |
2025-04-13 00:57:27.369601 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-04-13 00:57:27.369616 | orchestrator |
2025-04-13 00:57:27.369630 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-04-13 00:57:27.369644 | orchestrator | Sunday 13 April 2025 00:44:27 +0000 (0:00:01.811) 0:00:01.811 **********
2025-04-13 00:57:27.369659 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.369675 | orchestrator |
2025-04-13 00:57:27.369689 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-04-13 00:57:27.369718 | orchestrator | Sunday 13 April 2025 00:44:28 +0000 (0:00:01.240) 0:00:03.051 **********
2025-04-13 00:57:27.369733 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 00:57:27.369748 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1)
2025-04-13 00:57:27.369857 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2)
2025-04-13 00:57:27.369876 | orchestrator |
2025-04-13 00:57:27.370271 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-04-13 00:57:27.370289 | orchestrator | Sunday 13 April 2025 00:44:29 +0000 (0:00:00.551) 0:00:03.602 **********
2025-04-13 00:57:27.370327 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.370344 | orchestrator |
2025-04-13 00:57:27.370358 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-04-13 00:57:27.370372 | orchestrator | Sunday 13 April 2025 00:44:30 +0000 (0:00:01.261) 0:00:04.864 **********
2025-04-13 00:57:27.370386 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.370480 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.370502 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.370516 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.370530 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.370543 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.370557 | orchestrator |
2025-04-13 00:57:27.370571 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-04-13 00:57:27.370585 | orchestrator | Sunday 13 April 2025 00:44:31 +0000 (0:00:01.389) 0:00:06.253 **********
2025-04-13 00:57:27.370598 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.370612 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.370625 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.370639 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.370653 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.370666 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.370680 | orchestrator |
2025-04-13 00:57:27.370694 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-04-13 00:57:27.370707 | orchestrator | Sunday 13 April 2025 00:44:32 +0000 (0:00:00.899) 0:00:07.153 **********
2025-04-13 00:57:27.370817 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.370835 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.370849 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.370863 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.370879 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.370895 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.370911 | orchestrator |
2025-04-13 00:57:27.370926 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-04-13 00:57:27.370941 | orchestrator | Sunday 13 April 2025 00:44:34 +0000 (0:00:01.394) 0:00:08.547 **********
2025-04-13 00:57:27.370957 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.370972 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.370997 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.371026 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.371043 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.374235 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.374287 | orchestrator |
2025-04-13 00:57:27.374305 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-04-13 00:57:27.374320 | orchestrator | Sunday 13 April 2025 00:44:35 +0000 (0:00:00.943) 0:00:09.491 **********
2025-04-13 00:57:27.374348 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.374363 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.374377 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.374390 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.374404 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.374418 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.374432 | orchestrator |
2025-04-13 00:57:27.374446 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-04-13 00:57:27.374460 | orchestrator | Sunday 13 April 2025 00:44:36 +0000 (0:00:00.938) 0:00:10.429 **********
2025-04-13 00:57:27.374473 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.374487 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.374500 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.374514 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.374528 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.374541 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.374555 | orchestrator |
2025-04-13 00:57:27.374569 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-04-13 00:57:27.374613 | orchestrator | Sunday 13 April 2025 00:44:37 +0000 (0:00:01.178) 0:00:11.607 **********
2025-04-13 00:57:27.374628 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.374642 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.374656 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.374670 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.374684 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.374697 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.374711 | orchestrator |
2025-04-13 00:57:27.374725 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-04-13 00:57:27.374739 | orchestrator | Sunday 13 April 2025 00:44:38 +0000 (0:00:00.978) 0:00:12.586 **********
2025-04-13 00:57:27.374753 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.374766 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.374780 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.374794 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.374808 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.374822 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.374835 | orchestrator |
2025-04-13 00:57:27.374868 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-04-13 00:57:27.374883 | orchestrator | Sunday 13 April 2025 00:44:39 +0000 (0:00:01.071) 0:00:13.658 **********
2025-04-13 00:57:27.374897 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 00:57:27.374911 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-13 00:57:27.374925 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-13 00:57:27.374939 | orchestrator |
2025-04-13 00:57:27.374953 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-04-13 00:57:27.374967 | orchestrator | Sunday 13 April 2025 00:44:40 +0000 (0:00:00.702) 0:00:14.360 **********
2025-04-13 00:57:27.374981 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.374996 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.375010 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.375023 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.375037 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.375051 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.375064 | orchestrator |
2025-04-13 00:57:27.375078 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-04-13 00:57:27.375092 | orchestrator | Sunday 13 April 2025 00:44:41 +0000 (0:00:01.626) 0:00:15.986 **********
2025-04-13 00:57:27.375106 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 00:57:27.375120 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-13 00:57:27.375134 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-13 00:57:27.375176 | orchestrator |
2025-04-13 00:57:27.375191 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
2025-04-13 00:57:27.375205 | orchestrator | Sunday 13 April 2025 00:44:44 +0000 (0:00:03.144) 0:00:19.131 **********
2025-04-13 00:57:27.375219 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 00:57:27.375233 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-13 00:57:27.375247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-13 00:57:27.375261 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.375275 | orchestrator |
2025-04-13 00:57:27.375289 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] *********************
2025-04-13 00:57:27.375317 | orchestrator | Sunday 13 April 2025 00:44:45 +0000 (0:00:00.590) 0:00:19.721 **********
2025-04-13 00:57:27.375333 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-04-13 00:57:27.375390 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-04-13 00:57:27.375415 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-04-13 00:57:27.375430 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.375444 | orchestrator |
2025-04-13 00:57:27.375458 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] ***********************
2025-04-13 00:57:27.375472 | orchestrator | Sunday 13 April 2025 00:44:46 +0000 (0:00:00.775) 0:00:20.497 **********
2025-04-13 00:57:27.375487 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-04-13 00:57:27.375508 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-04-13 00:57:27.375522 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-04-13 00:57:27.375536 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.375550 | orchestrator |
2025-04-13 00:57:27.375564 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] ***************************
2025-04-13 00:57:27.375585 | orchestrator | Sunday 13 April 2025 00:44:46 +0000 (0:00:00.232) 0:00:20.730 **********
2025-04-13 00:57:27.375604 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-13 00:44:42.555703', 'end': '2025-04-13 00:44:42.830661', 'delta': '0:00:00.274958', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-04-13 00:57:27.375623 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-13 00:44:43.577008', 'end': '2025-04-13 00:44:43.853772', 'delta': '0:00:00.276764', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-04-13 00:57:27.375638 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-13 00:44:44.393633', 'end': '2025-04-13 00:44:44.656554', 'delta': '0:00:00.262921', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-04-13 00:57:27.375659 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.375674 | orchestrator |
2025-04-13 00:57:27.375688 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] *******************************
2025-04-13 00:57:27.375702 | orchestrator | Sunday 13 April 2025 00:44:46 +0000 (0:00:00.291) 0:00:21.021 **********
2025-04-13 00:57:27.375716 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.375730 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.375744 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.375757 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.375771 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.375785 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.375798 | orchestrator |
2025-04-13 00:57:27.375812 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] *************
2025-04-13 00:57:27.375826 | orchestrator | Sunday 13 April 2025 00:44:48 +0000 (0:00:01.741) 0:00:22.763 **********
2025-04-13 00:57:27.375840 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.375853 | orchestrator |
2025-04-13 00:57:27.375867 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] *********************************
2025-04-13 00:57:27.375881 | orchestrator | Sunday 13 April 2025 00:44:49 +0000 (0:00:00.645) 0:00:23.409 **********
2025-04-13 00:57:27.375895 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.375908 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.375922 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.375936 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.375950 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.375997 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.376018 | orchestrator |
2025-04-13 00:57:27.376032 | orchestrator | TASK [ceph-facts : get current fsid] *******************************************
2025-04-13 00:57:27.376046 | orchestrator | Sunday 13 April 2025 00:44:49 +0000 (0:00:00.627) 0:00:24.036 **********
2025-04-13 00:57:27.376060 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.376073 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.376087 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.376101 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.376115 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.376129 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.376173 | orchestrator |
2025-04-13 00:57:27.376189 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-04-13 00:57:27.376203 | orchestrator | Sunday 13 April 2025 00:44:50 +0000 (0:00:01.244) 0:00:25.280 **********
2025-04-13 00:57:27.376217 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.376231 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.376244 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.376258 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.376271 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.376285 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.376299 | orchestrator |
2025-04-13 00:57:27.376313 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] ****************************
2025-04-13 00:57:27.376326 | orchestrator | Sunday 13 April 2025 00:44:51 +0000 (0:00:00.938) 0:00:26.219 **********
2025-04-13 00:57:27.376347 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.376362 | orchestrator |
2025-04-13 00:57:27.376376 | orchestrator | TASK [ceph-facts : generate cluster fsid] **************************************
2025-04-13 00:57:27.376390 | orchestrator | Sunday 13 April 2025 00:44:52 +0000 (0:00:00.384) 0:00:26.604 **********
2025-04-13 00:57:27.376411 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.376425 | orchestrator |
2025-04-13 00:57:27.376439 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-04-13 00:57:27.376453 | orchestrator | Sunday 13 April 2025 00:44:52 +0000 (0:00:00.288) 0:00:26.892 **********
2025-04-13 00:57:27.376467 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.376480 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.376494 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.376508 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.376521 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.376535 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.376549 | orchestrator |
2025-04-13 00:57:27.376563 | orchestrator | TASK [ceph-facts : resolve device link(s)] *************************************
2025-04-13 00:57:27.376577 | orchestrator | Sunday 13 April 2025 00:44:53 +0000 (0:00:00.961) 0:00:27.853 **********
2025-04-13 00:57:27.376590 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.376604 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.376618 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.376632 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.376645 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.376659 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.376673 | orchestrator |
2025-04-13 00:57:27.376686 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] **************
2025-04-13 00:57:27.376700 | orchestrator | Sunday 13 April 2025 00:44:54 +0000 (0:00:01.341) 0:00:29.194 **********
2025-04-13 00:57:27.376714 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.376728 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.376741 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.376755 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.376769 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.376782 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.376796 | orchestrator |
2025-04-13 00:57:27.376810 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] ***************************
2025-04-13 00:57:27.376824 | orchestrator | Sunday 13 April 2025 00:44:55 +0000 (0:00:00.655) 0:00:29.850 **********
2025-04-13 00:57:27.376838 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.376851 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.376865 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.376879 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.376893 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.376907 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.376920 | orchestrator |
2025-04-13 00:57:27.376934 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] ****
2025-04-13 00:57:27.376948 | orchestrator | Sunday 13 April 2025 00:44:56 +0000 (0:00:01.027) 0:00:30.878 **********
2025-04-13 00:57:27.376962 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.376976 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.376989 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.377003 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.377017 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.377031 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.377044 | orchestrator |
2025-04-13 00:57:27.377058 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] ***********************
2025-04-13 00:57:27.377072 | orchestrator | Sunday 13 April 2025 00:44:57 +0000 (0:00:00.724) 0:00:31.602 **********
2025-04-13 00:57:27.377086 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.377099 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.377113 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.377127 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.377194 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.377210 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.377224 | orchestrator |
2025-04-13 00:57:27.377251 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-04-13 00:57:27.377326 | orchestrator | Sunday 13 April 2025 00:44:58 +0000 (0:00:00.984) 0:00:32.586 **********
2025-04-13 00:57:27.377341 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.377356 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.377379 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.377394 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.377408 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.377422 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.377436 | orchestrator |
2025-04-13 00:57:27.377450 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] ***
2025-04-13 00:57:27.377464 | orchestrator | Sunday 13 April 2025 00:44:59 +0000 (0:00:00.782) 0:00:33.369 **********
2025-04-13 00:57:27.377479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1', 'scsi-SQEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part1', 'scsi-SQEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part14', 'scsi-SQEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part15', 'scsi-SQEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part16', 'scsi-SQEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.377691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5c76205-09bb-4a16-ab8f-39ffb03c9143', 'scsi-SQEMU_QEMU_HARDDISK_f5c76205-09bb-4a16-ab8f-39ffb03c9143'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.377707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95b24700-cfbe-4d9d-a7ca-ca6e4d2b6d43', 'scsi-SQEMU_QEMU_HARDDISK_95b24700-cfbe-4d9d-a7ca-ca6e4d2b6d43'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.377758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b430468-eb80-4fc4-b9b2-ed2873d86014', 'scsi-SQEMU_QEMU_HARDDISK_9b430468-eb80-4fc4-b9b2-ed2873d86014'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.377788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-13-00-02-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.377846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377860 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.377875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.377923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_784fd8b6-165f-4d54-8bd6-d3b5fe38df06', 'scsi-SQEMU_QEMU_HARDDISK_784fd8b6-165f-4d54-8bd6-d3b5fe38df06'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_784fd8b6-165f-4d54-8bd6-d3b5fe38df06-part1', 'scsi-SQEMU_QEMU_HARDDISK_784fd8b6-165f-4d54-8bd6-d3b5fe38df06-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_784fd8b6-165f-4d54-8bd6-d3b5fe38df06-part14', 'scsi-SQEMU_QEMU_HARDDISK_784fd8b6-165f-4d54-8bd6-d3b5fe38df06-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_784fd8b6-165f-4d54-8bd6-d3b5fe38df06-part15', 'scsi-SQEMU_QEMU_HARDDISK_784fd8b6-165f-4d54-8bd6-d3b5fe38df06-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_784fd8b6-165f-4d54-8bd6-d3b5fe38df06-part16', 'scsi-SQEMU_QEMU_HARDDISK_784fd8b6-165f-4d54-8bd6-d3b5fe38df06-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.377940 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddf16837-33ca-409f-b739-a4d4760cfc5d', 'scsi-SQEMU_QEMU_HARDDISK_ddf16837-33ca-409f-b739-a4d4760cfc5d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.377955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0bf36b3b-f07e-4ca4-96cb-185377001260', 'scsi-SQEMU_QEMU_HARDDISK_0bf36b3b-f07e-4ca4-96cb-185377001260'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.377970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71ac43d1-dda3-4017-bb0e-4637e963cb04', 'scsi-SQEMU_QEMU_HARDDISK_71ac43d1-dda3-4017-bb0e-4637e963cb04'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.377992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-13-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.378007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-04-13 00:57:27.378102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378233 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.378248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c70cca57-340b-42ee-85c2-b3ee41d2b128', 'scsi-SQEMU_QEMU_HARDDISK_c70cca57-340b-42ee-85c2-b3ee41d2b128'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c70cca57-340b-42ee-85c2-b3ee41d2b128-part1', 'scsi-SQEMU_QEMU_HARDDISK_c70cca57-340b-42ee-85c2-b3ee41d2b128-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c70cca57-340b-42ee-85c2-b3ee41d2b128-part14', 'scsi-SQEMU_QEMU_HARDDISK_c70cca57-340b-42ee-85c2-b3ee41d2b128-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_c70cca57-340b-42ee-85c2-b3ee41d2b128-part15', 'scsi-SQEMU_QEMU_HARDDISK_c70cca57-340b-42ee-85c2-b3ee41d2b128-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c70cca57-340b-42ee-85c2-b3ee41d2b128-part16', 'scsi-SQEMU_QEMU_HARDDISK_c70cca57-340b-42ee-85c2-b3ee41d2b128-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.378276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_beb6d58d-9f9a-40a9-9a80-602a3ce24890', 'scsi-SQEMU_QEMU_HARDDISK_beb6d58d-9f9a-40a9-9a80-602a3ce24890'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.378293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7620518-2044-4595-90df-c620cad18d8d', 'scsi-SQEMU_QEMU_HARDDISK_f7620518-2044-4595-90df-c620cad18d8d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.378307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7742d708-e0a6-4322-a2de-81c274934e05', 'scsi-SQEMU_QEMU_HARDDISK_7742d708-e0a6-4322-a2de-81c274934e05'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.378331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-13-00-02-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.378346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--2045bad1--ab77--5a33--981a--e42fb4136085-osd--block--2045bad1--ab77--5a33--981a--e42fb4136085', 'dm-uuid-LVM-9ClZghmJtxOPX1O0zOX2WtCXvawwZfDy7wBl25fdepsNrLXd7sjUWlLK1N9BRuwM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--075038e7--2b9c--5de1--9fc0--4ab80f908b26-osd--block--075038e7--2b9c--5de1--9fc0--4ab80f908b26', 'dm-uuid-LVM-ijdtEhTChvVxavxMfY9fKDsMZwQKU6xtDJWHGcfUiA0AHJDZ056L3ZFkBJcDFDjX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378499 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.378513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099', 'scsi-SQEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part1', 'scsi-SQEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part14', 'scsi-SQEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part15', 'scsi-SQEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part16', 'scsi-SQEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.378564 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2045bad1--ab77--5a33--981a--e42fb4136085-osd--block--2045bad1--ab77--5a33--981a--e42fb4136085'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Wg7Rcb-fdKY-KXS7-TPfC-U0vO-eHnO-jchBgv', 'scsi-0QEMU_QEMU_HARDDISK_d62d4166-25a1-4741-94fc-59c78379b097', 'scsi-SQEMU_QEMU_HARDDISK_d62d4166-25a1-4741-94fc-59c78379b097'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.378581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--075038e7--2b9c--5de1--9fc0--4ab80f908b26-osd--block--075038e7--2b9c--5de1--9fc0--4ab80f908b26'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z8bkSt-YrWX-zbEK-9ciE-YDhx-WB78-xQG7ZG', 'scsi-0QEMU_QEMU_HARDDISK_24d70fc8-7961-4caf-9f39-267d5072f1bc', 'scsi-SQEMU_QEMU_HARDDISK_24d70fc8-7961-4caf-9f39-267d5072f1bc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.378596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3f4097-e1b2-4e0f-b572-2003c7cd8b15', 'scsi-SQEMU_QEMU_HARDDISK_bd3f4097-e1b2-4e0f-b572-2003c7cd8b15'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.378611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-13-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:57:27.378633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a50ad019--9a42--5399--96dd--0ec75fe99929-osd--block--a50ad019--9a42--5399--96dd--0ec75fe99929', 'dm-uuid-LVM-0MGJ4no5hg7d09lOzjNoAU8ORU59dmPsJyAr8ZQr8cP5sKdIDED1qCrvMRfzesIu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1aa12de--f4f1--5fa1--83b9--2c9c84fd1e23-osd--block--c1aa12de--f4f1--5fa1--83b9--2c9c84fd1e23', 
'dm-uuid-LVM-fcrvKvvG1tbWkSLXlca50ispeFKUupGEQEdmdc0FRNe91iBPAGIWkZVduBCSKi30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:57:27.378789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.378810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7', 'scsi-SQEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.378832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a50ad019--9a42--5399--96dd--0ec75fe99929-osd--block--a50ad019--9a42--5399--96dd--0ec75fe99929'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-exD8So-0SKp-0Ku2-66L3-4IzZ-cVpj-7Vw8bQ', 'scsi-0QEMU_QEMU_HARDDISK_a0e179ac-f513-4bce-8698-5c5d77bb97a6', 'scsi-SQEMU_QEMU_HARDDISK_a0e179ac-f513-4bce-8698-5c5d77bb97a6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.378847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c1aa12de--f4f1--5fa1--83b9--2c9c84fd1e23-osd--block--c1aa12de--f4f1--5fa1--83b9--2c9c84fd1e23'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sVOc6A-lmOP-2cez-e17H-BIO7-pUke-8KbMpp', 'scsi-0QEMU_QEMU_HARDDISK_aad8aa45-f541-429b-bfb0-28cd3fbd229c', 'scsi-SQEMU_QEMU_HARDDISK_aad8aa45-f541-429b-bfb0-28cd3fbd229c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.378868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea334510-65a0-4c82-ab7f-212ffba0ceeb', 'scsi-SQEMU_QEMU_HARDDISK_ea334510-65a0-4c82-ab7f-212ffba0ceeb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.378890 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.378909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-13-00-02-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.378924 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.378939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c75c5404--ac9a--5ffa--97a7--d9feeb5e7a2a-osd--block--c75c5404--ac9a--5ffa--97a7--d9feeb5e7a2a', 'dm-uuid-LVM-YxZVTg6p9WxxiVJ4KPLhGhHhq40mwRUjoroj3FrCb42cpkulySnmKq0DGWrlWzuP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.378953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cc16a9be--1c89--5ed3--8c34--f79b9c168598-osd--block--cc16a9be--1c89--5ed3--8c34--f79b9c168598', 'dm-uuid-LVM-3EJewZS2mDPacCqo8O8bhWXwBTUvAhMqy1z6rgd9gwK4f900LkMeiV7yeuPbN5zE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.378968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.378982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.378996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.379011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.379031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.379051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.379066 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.379080 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-13 00:57:27.379095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8', 'scsi-SQEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.379122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c75c5404--ac9a--5ffa--97a7--d9feeb5e7a2a-osd--block--c75c5404--ac9a--5ffa--97a7--d9feeb5e7a2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YqhesG-X622-ppI0-oBRQ-6rJ0-L1CB-dky6fD', 'scsi-0QEMU_QEMU_HARDDISK_15f38305-5d3a-4a2a-94a9-ec4f360f12f0', 'scsi-SQEMU_QEMU_HARDDISK_15f38305-5d3a-4a2a-94a9-ec4f360f12f0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.379160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cc16a9be--1c89--5ed3--8c34--f79b9c168598-osd--block--cc16a9be--1c89--5ed3--8c34--f79b9c168598'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IpkCNw-rEwG-L006-2kPo-Gqut-ZuOO-dqDdm9', 'scsi-0QEMU_QEMU_HARDDISK_466f66ff-268f-471d-abe8-9f0f353ab0cc', 'scsi-SQEMU_QEMU_HARDDISK_466f66ff-268f-471d-abe8-9f0f353ab0cc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.379176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d771f52a-9ada-4427-8de2-0003eafe1256', 'scsi-SQEMU_QEMU_HARDDISK_d771f52a-9ada-4427-8de2-0003eafe1256'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.379191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-13-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-13 00:57:27.379206 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.379220 | orchestrator |
2025-04-13 00:57:27.379234 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************
2025-04-13 00:57:27.379248 | orchestrator | Sunday 13 April 2025 00:45:00 +0000 (0:00:01.755) 0:00:35.125 **********
2025-04-13 00:57:27.379262 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.379276 | orchestrator |
2025-04-13 00:57:27.379290 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] *******************************
2025-04-13 00:57:27.379304 | orchestrator | Sunday 13 April 2025 00:45:01 +0000 (0:00:00.310) 0:00:35.436 **********
2025-04-13 00:57:27.379318 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.379332 | orchestrator |
2025-04-13 00:57:27.379346 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] **************************************
2025-04-13 00:57:27.379360 | orchestrator | Sunday 13 April 2025 00:45:01 +0000 (0:00:00.160) 0:00:35.597 **********
2025-04-13 00:57:27.379374 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.379388 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.379401 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.379415 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.379429 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.379443 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.379457 | orchestrator |
2025-04-13 00:57:27.379470 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ******************************
2025-04-13 00:57:27.379484 | orchestrator | Sunday 13 April 2025 00:45:02 +0000 (0:00:00.912) 0:00:36.509 **********
2025-04-13 00:57:27.379498 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.379518 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.379533 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.379547 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.379560 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.379574 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.379588 | orchestrator |
2025-04-13 00:57:27.379602 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] ***************
2025-04-13 00:57:27.379616 | orchestrator | Sunday 13 April 2025 00:45:03 +0000 (0:00:01.500) 0:00:38.010 **********
2025-04-13 00:57:27.379629 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.379643 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.379657 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.379670 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.379684 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.379698 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.379712 | orchestrator |
2025-04-13 00:57:27.379726 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-04-13 00:57:27.379740 | orchestrator | Sunday 13 April 2025 00:45:04 +0000 (0:00:00.866) 0:00:38.876 **********
2025-04-13 00:57:27.379753 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.379767 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.379781 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.379795 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.379809 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.379828 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.379842 | orchestrator |
2025-04-13 00:57:27.379856 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-04-13 00:57:27.379870 | orchestrator | Sunday 13 April 2025 00:45:05 +0000 (0:00:01.164) 0:00:40.041 **********
2025-04-13 00:57:27.379884 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.379898 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.379912 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.379926 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.379940 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.379954 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.379968 | orchestrator |
2025-04-13 00:57:27.379982 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-04-13 00:57:27.379995 | orchestrator | Sunday 13 April 2025 00:45:06 +0000 (0:00:00.995) 0:00:41.036 **********
2025-04-13 00:57:27.380009 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.380023 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.380036 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.380051 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.380064 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.380078 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.380092 | orchestrator |
2025-04-13 00:57:27.380106 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-04-13 00:57:27.380120 | orchestrator | Sunday 13 April 2025 00:45:08 +0000 (0:00:01.736) 0:00:42.773 **********
2025-04-13 00:57:27.380134 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.380204 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.380225 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.380239 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.380252 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.380266 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.380280 | orchestrator |
2025-04-13 00:57:27.380294 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-04-13 00:57:27.380308 | orchestrator | Sunday 13 April 2025 00:45:09 +0000 (0:00:01.263) 0:00:44.037 **********
2025-04-13 00:57:27.380322 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 00:57:27.380336 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-04-13 00:57:27.380350 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-13 00:57:27.380364 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-04-13 00:57:27.380388 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-13 00:57:27.380402 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-04-13 00:57:27.380416 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.380430 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-04-13 00:57:27.380444 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.380458 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-04-13 00:57:27.380476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-04-13 00:57:27.380489 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-04-13 00:57:27.380501 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.380513 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-04-13 00:57:27.380526 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-04-13 00:57:27.380538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-04-13 00:57:27.380550 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-04-13 00:57:27.380562 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-04-13 00:57:27.380574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-04-13 00:57:27.380586 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.380598 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-04-13 00:57:27.380610 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.380623 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-04-13 00:57:27.380635 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.380647 | orchestrator |
2025-04-13 00:57:27.380659 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-04-13 00:57:27.380671 | orchestrator | Sunday 13 April 2025 00:45:12 +0000 (0:00:02.747) 0:00:46.784 **********
2025-04-13 00:57:27.380684 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 00:57:27.380696 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-04-13 00:57:27.380708 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-13 00:57:27.380720 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-13 00:57:27.380733 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-04-13 00:57:27.380745 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-04-13 00:57:27.380757 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.380770 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-04-13 00:57:27.380782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-04-13 00:57:27.380794 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-04-13 00:57:27.380806 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-04-13 00:57:27.380818 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.380830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-04-13 00:57:27.380842 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-04-13 00:57:27.380855 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.380867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-04-13 00:57:27.380879 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.380891 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-04-13 00:57:27.380903 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-04-13 00:57:27.380916 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-04-13 00:57:27.380934 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-04-13 00:57:27.380946 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.380959 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-04-13 00:57:27.380971 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.380989 | orchestrator |
2025-04-13 00:57:27.381001 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-04-13 00:57:27.381014 | orchestrator | Sunday 13 April 2025 00:45:14 +0000 (0:00:05.103) 0:00:49.172 **********
2025-04-13 00:57:27.381026 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 00:57:27.381038 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-04-13 00:57:27.381051 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-04-13 00:57:27.381063 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-04-13 00:57:27.381075 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-04-13 00:57:27.381087 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-04-13 00:57:27.381100 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-04-13 00:57:27.381112 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-04-13 00:57:27.381124 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-04-13 00:57:27.381136 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-04-13 00:57:27.381163 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-04-13 00:57:27.381176 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-04-13 00:57:27.381187 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-04-13 00:57:27.381199 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-04-13 00:57:27.381212 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-04-13 00:57:27.381224 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-04-13 00:57:27.381236 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-04-13 00:57:27.381248 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-04-13 00:57:27.381260 | orchestrator |
2025-04-13 00:57:27.381272 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-04-13 00:57:27.381284 | orchestrator | Sunday 13 April 2025 00:45:19 +0000 (0:00:05.103) 0:00:54.275 **********
2025-04-13 00:57:27.381297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 00:57:27.381309 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-13 00:57:27.381321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-13 00:57:27.381333 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-04-13 00:57:27.381345 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-04-13 00:57:27.381357 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.381369 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-04-13 00:57:27.381381 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-04-13 00:57:27.381393 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-04-13 00:57:27.381406 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.381418 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-04-13 00:57:27.381435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-04-13 00:57:27.381448 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.381460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-04-13 00:57:27.381472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-04-13 00:57:27.381484 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-04-13 00:57:27.381496 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-04-13 00:57:27.381508 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-04-13 00:57:27.381521 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.381533 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.381545 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-04-13 00:57:27.381557 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-04-13 00:57:27.381569 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-04-13 00:57:27.381581 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.381599 | orchestrator |
2025-04-13 00:57:27.381612 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-04-13 00:57:27.381624 | orchestrator | Sunday 13 April 2025 00:45:21 +0000 (0:00:01.541) 0:00:55.817 **********
2025-04-13 00:57:27.381636 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 00:57:27.381653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-13 00:57:27.381666 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-13 00:57:27.381678 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-04-13 00:57:27.381690 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-04-13 00:57:27.381702 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.381715 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-04-13 00:57:27.381727 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-04-13 00:57:27.381739 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-04-13 00:57:27.381751 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-04-13 00:57:27.381763 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.381776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-04-13 00:57:27.381788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-04-13 00:57:27.381800 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.381813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-04-13 00:57:27.381825 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-04-13 00:57:27.381842 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-04-13 00:57:27.381855 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-04-13 00:57:27.381867 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.381880 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.381892 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-04-13 00:57:27.381904 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-04-13 00:57:27.381916 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-04-13 00:57:27.381928 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.381940 | orchestrator |
2025-04-13 00:57:27.381952 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-04-13 00:57:27.381965 | orchestrator | Sunday 13 April 2025 00:45:22 +0000 (0:00:01.235) 0:00:57.052 **********
2025-04-13 00:57:27.381977 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-04-13 00:57:27.381989 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-04-13 00:57:27.382002 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-04-13 00:57:27.382058 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-04-13 00:57:27.382074 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-04-13 00:57:27.382088 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-04-13 00:57:27.382101 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-04-13 00:57:27.382113 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-04-13 00:57:27.382125 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-04-13 00:57:27.382152 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-04-13 00:57:27.382165 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-04-13 00:57:27.382177 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-04-13 00:57:27.382197 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.382210 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-04-13 00:57:27.382222 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-04-13 00:57:27.382235 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-04-13 00:57:27.382247 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.382259 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-04-13 00:57:27.382272 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-04-13 00:57:27.382284 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-04-13 00:57:27.382296 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.382309 | orchestrator |
2025-04-13 00:57:27.382321 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
2025-04-13 00:57:27.382334 | orchestrator | Sunday 13 April 2025 00:45:24 +0000 (0:00:01.304) 0:00:58.357 **********
2025-04-13 00:57:27.382346 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.382358 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.382371 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.382383 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.382395 | orchestrator |
2025-04-13 00:57:27.382408 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-13 00:57:27.382420 | orchestrator | Sunday 13 April 2025 00:45:25 +0000 (0:00:01.139) 0:00:59.496 **********
2025-04-13 00:57:27.382432 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.382445 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.382457 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.382469 | orchestrator |
2025-04-13 00:57:27.382481 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-13 00:57:27.382493 | orchestrator | Sunday 13 April 2025 00:45:25 +0000 (0:00:00.580) 0:01:00.077 **********
2025-04-13 00:57:27.382506 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.382518 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.382530 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.382542 | orchestrator |
2025-04-13 00:57:27.382555 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-13 00:57:27.382567 | orchestrator | Sunday 13 April 2025 00:45:26 +0000 (0:00:00.859) 0:01:00.936 **********
2025-04-13 00:57:27.382579 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.382591 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.382603 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.382616 | orchestrator |
2025-04-13 00:57:27.382628 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-13 00:57:27.382640 | orchestrator | Sunday 13 April 2025 00:45:27 +0000 (0:00:00.570) 0:01:01.507 **********
2025-04-13 00:57:27.382653 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.382665 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.382677 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.382690 | orchestrator |
2025-04-13 00:57:27.382702 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-13 00:57:27.382727 | orchestrator | Sunday 13 April 2025 00:45:28 +0000 (0:00:00.816) 0:01:02.324 **********
2025-04-13 00:57:27.382741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.382753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.382766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.382778 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.382796 | orchestrator |
2025-04-13 00:57:27.382809 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-13 00:57:27.382821 | orchestrator | Sunday 13 April 2025 00:45:28 +0000 (0:00:00.538) 0:01:02.862 **********
2025-04-13 00:57:27.382833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.382846 | orchestrator |
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-13 00:57:27.382858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-13 00:57:27.382870 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.382882 | orchestrator | 2025-04-13 00:57:27.382895 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-13 00:57:27.382907 | orchestrator | Sunday 13 April 2025 00:45:29 +0000 (0:00:00.590) 0:01:03.453 ********** 2025-04-13 00:57:27.382919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-13 00:57:27.382931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-13 00:57:27.382943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-13 00:57:27.382956 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.382973 | orchestrator | 2025-04-13 00:57:27.382986 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-13 00:57:27.382998 | orchestrator | Sunday 13 April 2025 00:45:30 +0000 (0:00:01.298) 0:01:04.751 ********** 2025-04-13 00:57:27.383011 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.383023 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.383040 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.383053 | orchestrator | 2025-04-13 00:57:27.383065 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-13 00:57:27.383077 | orchestrator | Sunday 13 April 2025 00:45:31 +0000 (0:00:00.798) 0:01:05.550 ********** 2025-04-13 00:57:27.383089 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-13 00:57:27.383102 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-13 00:57:27.383115 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-13 00:57:27.383127 | orchestrator | 2025-04-13 00:57:27.383153 | orchestrator | TASK [ceph-facts : set_fact 
is_rgw_instances_defined] ************************** 2025-04-13 00:57:27.383166 | orchestrator | Sunday 13 April 2025 00:45:32 +0000 (0:00:01.060) 0:01:06.611 ********** 2025-04-13 00:57:27.383178 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.383191 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.383203 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.383216 | orchestrator | 2025-04-13 00:57:27.383228 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-13 00:57:27.383240 | orchestrator | Sunday 13 April 2025 00:45:32 +0000 (0:00:00.629) 0:01:07.240 ********** 2025-04-13 00:57:27.383253 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.383265 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.383277 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.383290 | orchestrator | 2025-04-13 00:57:27.383302 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-13 00:57:27.383315 | orchestrator | Sunday 13 April 2025 00:45:33 +0000 (0:00:00.763) 0:01:08.004 ********** 2025-04-13 00:57:27.383327 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-13 00:57:27.383340 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.383352 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-13 00:57:27.383364 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.383377 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-13 00:57:27.383389 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.383401 | orchestrator | 2025-04-13 00:57:27.383414 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-13 00:57:27.383430 | orchestrator | Sunday 13 April 2025 00:45:34 +0000 (0:00:00.815) 0:01:08.820 ********** 2025-04-13 00:57:27.383443 | orchestrator | skipping: [testbed-node-3] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-13 00:57:27.383462 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.383475 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-13 00:57:27.383487 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.383500 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-13 00:57:27.383512 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.383524 | orchestrator | 2025-04-13 00:57:27.383541 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-13 00:57:27.383554 | orchestrator | Sunday 13 April 2025 00:45:35 +0000 (0:00:01.130) 0:01:09.951 ********** 2025-04-13 00:57:27.383566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-13 00:57:27.383579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-13 00:57:27.383591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-13 00:57:27.383603 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-13 00:57:27.383616 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.383628 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-13 00:57:27.383640 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-13 00:57:27.383652 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.383665 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-13 00:57:27.383677 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-13 00:57:27.383695 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-13 00:57:27.383708 | 
orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.383721 | orchestrator | 2025-04-13 00:57:27.383734 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-13 00:57:27.383746 | orchestrator | Sunday 13 April 2025 00:45:36 +0000 (0:00:01.194) 0:01:11.145 ********** 2025-04-13 00:57:27.383758 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.383771 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.383783 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.383795 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.383808 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.383821 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.383833 | orchestrator | 2025-04-13 00:57:27.383846 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-13 00:57:27.383858 | orchestrator | Sunday 13 April 2025 00:45:37 +0000 (0:00:00.972) 0:01:12.118 ********** 2025-04-13 00:57:27.383871 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-13 00:57:27.383883 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-13 00:57:27.383896 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-13 00:57:27.383908 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-13 00:57:27.383920 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-13 00:57:27.383932 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-13 00:57:27.383944 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-13 00:57:27.383956 | orchestrator | 2025-04-13 00:57:27.383969 | orchestrator | TASK [ceph-facts : set_fact 
ceph_admin_command] ******************************** 2025-04-13 00:57:27.383981 | orchestrator | Sunday 13 April 2025 00:45:38 +0000 (0:00:01.034) 0:01:13.152 ********** 2025-04-13 00:57:27.383993 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-13 00:57:27.384006 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-13 00:57:27.384018 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-13 00:57:27.384037 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-13 00:57:27.384049 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-13 00:57:27.384061 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-13 00:57:27.384073 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-13 00:57:27.384086 | orchestrator | 2025-04-13 00:57:27.384098 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-13 00:57:27.384110 | orchestrator | Sunday 13 April 2025 00:45:41 +0000 (0:00:02.315) 0:01:15.467 ********** 2025-04-13 00:57:27.384122 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:57:27.384136 | orchestrator | 2025-04-13 00:57:27.384190 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-13 00:57:27.384203 | orchestrator | Sunday 13 April 2025 00:45:42 +0000 (0:00:01.761) 0:01:17.228 ********** 2025-04-13 00:57:27.384215 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.384228 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.384240 | orchestrator | skipping: [testbed-node-3] 2025-04-13 
00:57:27.384252 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.384264 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.384276 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.384288 | orchestrator | 2025-04-13 00:57:27.384301 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-13 00:57:27.384313 | orchestrator | Sunday 13 April 2025 00:45:43 +0000 (0:00:00.948) 0:01:18.177 ********** 2025-04-13 00:57:27.384326 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.384338 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.384350 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.384362 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.384374 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.384387 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.384399 | orchestrator | 2025-04-13 00:57:27.384411 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-13 00:57:27.384424 | orchestrator | Sunday 13 April 2025 00:45:45 +0000 (0:00:01.511) 0:01:19.689 ********** 2025-04-13 00:57:27.384436 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.384448 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.384461 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.384473 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.384485 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.384495 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.384505 | orchestrator | 2025-04-13 00:57:27.384515 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-13 00:57:27.384525 | orchestrator | Sunday 13 April 2025 00:45:46 +0000 (0:00:01.415) 0:01:21.105 ********** 2025-04-13 00:57:27.384535 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.384545 | orchestrator | skipping: 
[testbed-node-1] 2025-04-13 00:57:27.384555 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.384565 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.384575 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.384585 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.384595 | orchestrator | 2025-04-13 00:57:27.384606 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-13 00:57:27.384616 | orchestrator | Sunday 13 April 2025 00:45:48 +0000 (0:00:01.567) 0:01:22.672 ********** 2025-04-13 00:57:27.384626 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.384636 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.384662 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.384673 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.384683 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.384693 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.384709 | orchestrator | 2025-04-13 00:57:27.384719 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-13 00:57:27.384729 | orchestrator | Sunday 13 April 2025 00:45:49 +0000 (0:00:00.971) 0:01:23.644 ********** 2025-04-13 00:57:27.384739 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.384749 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.384759 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.384769 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.384779 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.384789 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.384799 | orchestrator | 2025-04-13 00:57:27.384809 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-13 00:57:27.384819 | orchestrator | Sunday 13 April 2025 00:45:49 +0000 (0:00:00.574) 0:01:24.219 ********** 2025-04-13 
00:57:27.384829 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.384839 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.384849 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.384858 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.384868 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.384878 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.384888 | orchestrator | 2025-04-13 00:57:27.384898 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-13 00:57:27.384908 | orchestrator | Sunday 13 April 2025 00:45:51 +0000 (0:00:01.142) 0:01:25.362 ********** 2025-04-13 00:57:27.384918 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.384928 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.384938 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.384947 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.384957 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.384967 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.384977 | orchestrator | 2025-04-13 00:57:27.384987 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-13 00:57:27.384997 | orchestrator | Sunday 13 April 2025 00:45:51 +0000 (0:00:00.656) 0:01:26.018 ********** 2025-04-13 00:57:27.385007 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.385016 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.385026 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.385036 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.385046 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.385056 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.385066 | orchestrator | 2025-04-13 00:57:27.385076 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] 
********************** 2025-04-13 00:57:27.385086 | orchestrator | Sunday 13 April 2025 00:45:52 +0000 (0:00:01.068) 0:01:27.086 ********** 2025-04-13 00:57:27.385096 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.385105 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.385116 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.385125 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.385135 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.385160 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.385170 | orchestrator | 2025-04-13 00:57:27.385180 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-13 00:57:27.385190 | orchestrator | Sunday 13 April 2025 00:45:53 +0000 (0:00:00.799) 0:01:27.886 ********** 2025-04-13 00:57:27.385200 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.385210 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.385220 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.385230 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.385239 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.385249 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.385259 | orchestrator | 2025-04-13 00:57:27.385269 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-13 00:57:27.385279 | orchestrator | Sunday 13 April 2025 00:45:55 +0000 (0:00:01.422) 0:01:29.309 ********** 2025-04-13 00:57:27.385295 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.385305 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.385315 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.385325 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.385334 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.385344 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.385354 | orchestrator | 
2025-04-13 00:57:27.385364 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-13 00:57:27.385374 | orchestrator | Sunday 13 April 2025 00:45:55 +0000 (0:00:00.766) 0:01:30.075 ********** 2025-04-13 00:57:27.385384 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.385394 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.385404 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.385413 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.385423 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.385433 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.385448 | orchestrator | 2025-04-13 00:57:27.385459 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-13 00:57:27.385469 | orchestrator | Sunday 13 April 2025 00:45:56 +0000 (0:00:01.185) 0:01:31.261 ********** 2025-04-13 00:57:27.385479 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.385490 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.385500 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.385510 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.385520 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.385530 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.385540 | orchestrator | 2025-04-13 00:57:27.385550 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-13 00:57:27.385559 | orchestrator | Sunday 13 April 2025 00:45:57 +0000 (0:00:00.971) 0:01:32.233 ********** 2025-04-13 00:57:27.385569 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.385579 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.385589 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.385599 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.385609 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.385619 
| orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.385629 | orchestrator | 2025-04-13 00:57:27.385639 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-13 00:57:27.385654 | orchestrator | Sunday 13 April 2025 00:45:58 +0000 (0:00:00.670) 0:01:32.904 ********** 2025-04-13 00:57:27.385664 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.385674 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.385684 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.385694 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.385704 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.385714 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.385724 | orchestrator | 2025-04-13 00:57:27.385734 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-13 00:57:27.385744 | orchestrator | Sunday 13 April 2025 00:45:59 +0000 (0:00:00.865) 0:01:33.769 ********** 2025-04-13 00:57:27.385754 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.385763 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.385773 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.385783 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.385793 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.385803 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.385813 | orchestrator | 2025-04-13 00:57:27.385822 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-13 00:57:27.385832 | orchestrator | Sunday 13 April 2025 00:46:00 +0000 (0:00:00.594) 0:01:34.364 ********** 2025-04-13 00:57:27.385842 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.385852 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.385862 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.385877 | orchestrator | skipping: 
[testbed-node-3] 2025-04-13 00:57:27.385887 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.385897 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.385907 | orchestrator | 2025-04-13 00:57:27.385917 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-13 00:57:27.385927 | orchestrator | Sunday 13 April 2025 00:46:00 +0000 (0:00:00.820) 0:01:35.184 ********** 2025-04-13 00:57:27.385937 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.385947 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.385957 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.385966 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.385976 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.385986 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.385996 | orchestrator | 2025-04-13 00:57:27.386006 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-13 00:57:27.386191 | orchestrator | Sunday 13 April 2025 00:46:01 +0000 (0:00:00.608) 0:01:35.793 ********** 2025-04-13 00:57:27.386212 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.386223 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.386233 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.386243 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.386253 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.386262 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.386272 | orchestrator | 2025-04-13 00:57:27.386282 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-13 00:57:27.386298 | orchestrator | Sunday 13 April 2025 00:46:02 +0000 (0:00:00.854) 0:01:36.647 ********** 2025-04-13 00:57:27.386309 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.386319 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.386329 | orchestrator 
| skipping: [testbed-node-2] 2025-04-13 00:57:27.386339 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.386349 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.386359 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.386369 | orchestrator | 2025-04-13 00:57:27.386379 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-13 00:57:27.386389 | orchestrator | Sunday 13 April 2025 00:46:02 +0000 (0:00:00.629) 0:01:37.277 ********** 2025-04-13 00:57:27.386399 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.386408 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.386423 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.386434 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.386444 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.386453 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.386463 | orchestrator | 2025-04-13 00:57:27.386474 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-13 00:57:27.386484 | orchestrator | Sunday 13 April 2025 00:46:03 +0000 (0:00:00.812) 0:01:38.090 ********** 2025-04-13 00:57:27.386494 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.386504 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.386514 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.386524 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.386533 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.386543 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.386553 | orchestrator | 2025-04-13 00:57:27.386563 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-13 00:57:27.386573 | orchestrator | Sunday 13 April 2025 00:46:04 +0000 (0:00:00.620) 0:01:38.710 ********** 2025-04-13 00:57:27.386583 | orchestrator | 
skipping: [testbed-node-0] 2025-04-13 00:57:27.386593 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.386603 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.386613 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.386623 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.386633 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.386643 | orchestrator | 2025-04-13 00:57:27.386660 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-13 00:57:27.386670 | orchestrator | Sunday 13 April 2025 00:46:05 +0000 (0:00:01.077) 0:01:39.788 ********** 2025-04-13 00:57:27.386680 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.386690 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.386700 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.386710 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.386720 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.386730 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.386740 | orchestrator | 2025-04-13 00:57:27.386750 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-13 00:57:27.386760 | orchestrator | Sunday 13 April 2025 00:46:06 +0000 (0:00:00.751) 0:01:40.539 ********** 2025-04-13 00:57:27.386772 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.386784 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.386795 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.386806 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.386818 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.386829 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.386840 | orchestrator | 2025-04-13 00:57:27.386922 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-13 
00:57:27.386938 | orchestrator | Sunday 13 April 2025 00:46:07 +0000 (0:00:00.982) 0:01:41.522 **********
2025-04-13 00:57:27.386950 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.386961 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.386973 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.386984 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.386996 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.387007 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.387018 | orchestrator |
2025-04-13 00:57:27.387029 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-04-13 00:57:27.387041 | orchestrator | Sunday 13 April 2025 00:46:08 +0000 (0:00:00.793) 0:01:42.316 **********
2025-04-13 00:57:27.387058 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.387069 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.387080 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.387092 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.387103 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.387114 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.387125 | orchestrator |
2025-04-13 00:57:27.387135 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-04-13 00:57:27.387189 | orchestrator | Sunday 13 April 2025 00:46:08 +0000 (0:00:00.859) 0:01:43.175 **********
2025-04-13 00:57:27.387200 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.387209 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.387219 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.387229 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.387239 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.387249 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.387259 | orchestrator |
2025-04-13 00:57:27.387269 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-04-13 00:57:27.387279 | orchestrator | Sunday 13 April 2025 00:46:09 +0000 (0:00:00.693) 0:01:43.868 **********
2025-04-13 00:57:27.387289 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.387299 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.387308 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.387318 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.387328 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.387343 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.387353 | orchestrator |
2025-04-13 00:57:27.387363 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-04-13 00:57:27.387380 | orchestrator | Sunday 13 April 2025 00:46:10 +0000 (0:00:00.963) 0:01:44.832 **********
2025-04-13 00:57:27.387390 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.387400 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.387410 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.387420 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.387430 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.387440 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.387450 | orchestrator |
2025-04-13 00:57:27.387460 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-04-13 00:57:27.387469 | orchestrator | Sunday 13 April 2025 00:46:11 +0000 (0:00:00.658) 0:01:45.491 **********
2025-04-13 00:57:27.387477 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.387486 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.387494 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.387503 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.387511 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.387520 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.387528 | orchestrator |
2025-04-13 00:57:27.387536 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-04-13 00:57:27.387546 | orchestrator | Sunday 13 April 2025 00:46:12 +0000 (0:00:00.859) 0:01:46.350 **********
2025-04-13 00:57:27.387554 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-13 00:57:27.387563 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-13 00:57:27.387571 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.387580 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-13 00:57:27.387588 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-13 00:57:27.387597 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.387605 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-13 00:57:27.387614 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-13 00:57:27.387622 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.387631 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-13 00:57:27.387639 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-13 00:57:27.387648 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-13 00:57:27.387656 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-13 00:57:27.387665 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.387673 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.387682 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-13 00:57:27.387694 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-13 00:57:27.387703 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.387711 | orchestrator |
2025-04-13 00:57:27.387720 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-04-13 00:57:27.387728 | orchestrator | Sunday 13 April 2025 00:46:12 +0000 (0:00:00.676) 0:01:47.027 **********
2025-04-13 00:57:27.387737 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-04-13 00:57:27.387745 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-04-13 00:57:27.387754 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.387763 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-04-13 00:57:27.387771 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-04-13 00:57:27.387779 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.387788 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-04-13 00:57:27.387797 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-04-13 00:57:27.387857 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.387870 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-04-13 00:57:27.387879 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-04-13 00:57:27.387887 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.387896 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-04-13 00:57:27.387910 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-04-13 00:57:27.387918 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.387927 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-04-13 00:57:27.387936 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-04-13 00:57:27.387944 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.387953 | orchestrator |
2025-04-13 00:57:27.387961 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-04-13 00:57:27.387970 | orchestrator | Sunday 13 April 2025 00:46:13 +0000 (0:00:00.896) 0:01:47.924 **********
2025-04-13 00:57:27.387978 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.387987 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.387996 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.388004 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.388013 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.388021 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.388030 | orchestrator |
2025-04-13 00:57:27.388038 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-04-13 00:57:27.388047 | orchestrator | Sunday 13 April 2025 00:46:14 +0000 (0:00:00.646) 0:01:48.571 **********
2025-04-13 00:57:27.388055 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.388064 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.388072 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.388081 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.388089 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.388098 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.388106 | orchestrator |
2025-04-13 00:57:27.388115 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-13 00:57:27.388124 | orchestrator | Sunday 13 April 2025 00:46:15 +0000 (0:00:00.806) 0:01:49.377 **********
2025-04-13 00:57:27.388133 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.388154 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.388164 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.388172 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.388180 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.388189 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.388197 | orchestrator |
2025-04-13 00:57:27.388206 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-13 00:57:27.388214 | orchestrator | Sunday 13 April 2025 00:46:15 +0000 (0:00:00.615) 0:01:49.993 **********
2025-04-13 00:57:27.388223 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.388231 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.388240 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.388248 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.388256 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.388265 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.388273 | orchestrator |
2025-04-13 00:57:27.388282 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-13 00:57:27.388291 | orchestrator | Sunday 13 April 2025 00:46:16 +0000 (0:00:00.869) 0:01:50.862 **********
2025-04-13 00:57:27.388299 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.388308 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.388321 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.388330 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.388338 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.388347 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.388356 | orchestrator |
2025-04-13 00:57:27.388367 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-13 00:57:27.388376 | orchestrator | Sunday 13 April 2025 00:46:17 +0000 (0:00:00.645) 0:01:51.508 **********
2025-04-13 00:57:27.388385 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.388402 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.388410 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.388419 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.388427 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.388436 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.388444 | orchestrator |
2025-04-13 00:57:27.388452 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-13 00:57:27.388461 | orchestrator | Sunday 13 April 2025 00:46:18 +0000 (0:00:00.876) 0:01:52.384 **********
2025-04-13 00:57:27.388469 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-13 00:57:27.388478 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-13 00:57:27.388487 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-13 00:57:27.388497 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.388506 | orchestrator |
2025-04-13 00:57:27.388516 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-13 00:57:27.388525 | orchestrator | Sunday 13 April 2025 00:46:18 +0000 (0:00:00.431) 0:01:52.816 **********
2025-04-13 00:57:27.388535 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-13 00:57:27.388544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-13 00:57:27.388554 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-13 00:57:27.388563 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.388573 | orchestrator |
2025-04-13 00:57:27.388583 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-13 00:57:27.388593 | orchestrator | Sunday 13 April 2025 00:46:18 +0000 (0:00:00.429) 0:01:53.245 **********
2025-04-13 00:57:27.388603 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-13 00:57:27.388612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-13 00:57:27.388622 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-13 00:57:27.388683 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.388696 | orchestrator |
2025-04-13 00:57:27.388706 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-13 00:57:27.388715 | orchestrator | Sunday 13 April 2025 00:46:19 +0000 (0:00:00.434) 0:01:53.680 **********
2025-04-13 00:57:27.388725 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.388733 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.388742 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.388750 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.388759 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.388767 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.388776 | orchestrator |
2025-04-13 00:57:27.388784 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-13 00:57:27.388793 | orchestrator | Sunday 13 April 2025 00:46:19 +0000 (0:00:00.587) 0:01:54.267 **********
2025-04-13 00:57:27.388801 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-04-13 00:57:27.388810 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.388818 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-04-13 00:57:27.388827 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.388835 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-04-13 00:57:27.388844 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.388852 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-13 00:57:27.388860 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.388869 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-13 00:57:27.388877 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.388886 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-13 00:57:27.388894 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.388902 | orchestrator |
2025-04-13 00:57:27.388911 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-04-13 00:57:27.388925 | orchestrator | Sunday 13 April 2025 00:46:20 +0000 (0:00:01.023) 0:01:55.290 **********
2025-04-13 00:57:27.388934 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.388942 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.388951 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.388959 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.388968 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.388976 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.388984 | orchestrator |
2025-04-13 00:57:27.388993 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-13 00:57:27.389002 | orchestrator | Sunday 13 April 2025 00:46:21 +0000 (0:00:00.599) 0:01:55.890 **********
2025-04-13 00:57:27.389010 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.389019 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.389027 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.389035 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.389044 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.389052 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.389060 | orchestrator |
2025-04-13 00:57:27.389069 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-04-13 00:57:27.389077 | orchestrator | Sunday 13 April 2025 00:46:22 +0000 (0:00:00.823) 0:01:56.713 **********
2025-04-13 00:57:27.389086 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-04-13 00:57:27.389095 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.389104 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-04-13 00:57:27.389112 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.389121 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-04-13 00:57:27.389129 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.389151 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-13 00:57:27.389161 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.389169 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-13 00:57:27.389178 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.389186 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-13 00:57:27.389195 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.389203 | orchestrator |
2025-04-13 00:57:27.389212 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-04-13 00:57:27.389220 | orchestrator | Sunday 13 April 2025 00:46:23 +0000 (0:00:00.795) 0:01:57.508 **********
2025-04-13 00:57:27.389229 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.389237 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.389245 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.389254 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-04-13 00:57:27.389263 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.389276 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-04-13 00:57:27.389285 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.389293 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-04-13 00:57:27.389302 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.389311 | orchestrator |
2025-04-13 00:57:27.389319 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-04-13 00:57:27.389328 | orchestrator | Sunday 13 April 2025 00:46:24 +0000 (0:00:00.829) 0:01:58.338 **********
2025-04-13 00:57:27.389336 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-13 00:57:27.389345 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-13 00:57:27.389353 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-13 00:57:27.389362 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.389370 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-04-13 00:57:27.389383 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-04-13 00:57:27.389392 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-04-13 00:57:27.389400 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.389413 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-04-13 00:57:27.389470 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-04-13 00:57:27.389482 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-04-13 00:57:27.389490 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.389504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.389513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.389522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.389530 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.389539 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-04-13 00:57:27.389548 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-04-13 00:57:27.389556 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-04-13 00:57:27.389565 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.389573 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-04-13 00:57:27.389582 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-04-13 00:57:27.389590 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-04-13 00:57:27.389598 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.389607 | orchestrator |
2025-04-13 00:57:27.389615 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-04-13 00:57:27.389623 | orchestrator | Sunday 13 April 2025 00:46:25 +0000 (0:00:01.581) 0:01:59.920 **********
2025-04-13 00:57:27.389632 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.389640 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.389649 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.389751 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.389762 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.389771 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.389779 | orchestrator |
2025-04-13 00:57:27.389787 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-04-13 00:57:27.389796 | orchestrator | Sunday 13 April 2025 00:46:26 +0000 (0:00:01.315) 0:02:01.235 **********
2025-04-13 00:57:27.389804 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.389812 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.389821 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.389829 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-13 00:57:27.389838 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.389846 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-04-13 00:57:27.389854 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.389863 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-04-13 00:57:27.389871 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.389880 | orchestrator |
2025-04-13 00:57:27.389888 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-04-13 00:57:27.389897 | orchestrator | Sunday 13 April 2025 00:46:28 +0000 (0:00:01.538) 0:02:02.774 **********
2025-04-13 00:57:27.389905 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.389913 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.389922 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.389930 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.389938 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.389947 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.389955 | orchestrator |
2025-04-13 00:57:27.389964 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-04-13 00:57:27.389972 | orchestrator | Sunday 13 April 2025 00:46:29 +0000 (0:00:01.264) 0:02:04.038 **********
2025-04-13 00:57:27.389987 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.389996 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.390004 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.390012 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.390052 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.390061 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.390069 | orchestrator |
2025-04-13 00:57:27.390078 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] ***********
2025-04-13 00:57:27.390086 | orchestrator | Sunday 13 April 2025 00:46:31 +0000 (0:00:01.491) 0:02:05.530 **********
2025-04-13 00:57:27.390309 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.390325 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.390334 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.390342 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.390350 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.390359 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.390367 | orchestrator |
2025-04-13 00:57:27.390381 | orchestrator | TASK [ceph-container-common : enable ceph.target] ******************************
2025-04-13 00:57:27.390390 | orchestrator | Sunday 13 April 2025 00:46:33 +0000 (0:00:01.869) 0:02:07.400 **********
2025-04-13 00:57:27.390399 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.390407 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.390415 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.390424 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.390432 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.390441 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.390449 | orchestrator |
2025-04-13 00:57:27.390458 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] ***********************
2025-04-13 00:57:27.390466 | orchestrator | Sunday 13 April 2025 00:46:35 +0000 (0:00:01.915) 0:02:09.315 **********
2025-04-13 00:57:27.390475 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.390485 | orchestrator |
2025-04-13 00:57:27.390493 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************
2025-04-13 00:57:27.390502 | orchestrator | Sunday 13 April 2025 00:46:36 +0000 (0:00:01.235) 0:02:10.551 **********
2025-04-13 00:57:27.390510 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.390518 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.390527 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.390535 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.390544 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.390552 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.390561 | orchestrator |
2025-04-13 00:57:27.390639 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] ****************
2025-04-13 00:57:27.390652 | orchestrator | Sunday 13 April 2025 00:46:37 +0000 (0:00:00.827) 0:02:11.379 **********
2025-04-13 00:57:27.390661 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.390670 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.390678 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.390687 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.390695 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.390709 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.390718 | orchestrator |
2025-04-13 00:57:27.390727 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] **************************
2025-04-13 00:57:27.390735 | orchestrator | Sunday 13 April 2025 00:46:37 +0000 (0:00:00.637) 0:02:12.016 **********
2025-04-13 00:57:27.390744 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-04-13 00:57:27.390752 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-04-13 00:57:27.390761 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-04-13 00:57:27.390778 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-04-13 00:57:27.390786 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-04-13 00:57:27.390795 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-04-13 00:57:27.390804 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-04-13 00:57:27.390812 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-04-13 00:57:27.390820 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-04-13 00:57:27.390829 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-04-13 00:57:27.390837 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-04-13 00:57:27.390846 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-04-13 00:57:27.390854 | orchestrator |
2025-04-13 00:57:27.390862 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ********************
2025-04-13 00:57:27.390871 | orchestrator | Sunday 13 April 2025 00:46:39 +0000 (0:00:01.582) 0:02:13.599 **********
2025-04-13 00:57:27.390879 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.390888 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.390896 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.390905 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.390913 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.390922 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.390930 | orchestrator |
2025-04-13 00:57:27.390939 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************
2025-04-13 00:57:27.390947 | orchestrator | Sunday 13 April 2025 00:46:40 +0000 (0:00:01.123) 0:02:14.722 **********
2025-04-13 00:57:27.390956 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.390964 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.390973 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.390981 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.390990 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.390998 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.391006 | orchestrator |
2025-04-13 00:57:27.391015 | orchestrator | TASK [ceph-container-common : include registry.yml] ****************************
2025-04-13 00:57:27.391023 | orchestrator | Sunday 13 April 2025 00:46:41 +0000 (0:00:01.190) 0:02:15.913 **********
2025-04-13 00:57:27.391032 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.391041 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.391049 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.391057 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.391066 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.391074 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.391083 | orchestrator |
2025-04-13 00:57:27.391091 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] *************************
2025-04-13 00:57:27.391100 | orchestrator | Sunday 13 April 2025 00:46:42 +0000 (0:00:00.773) 0:02:16.686 **********
2025-04-13 00:57:27.391109 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.391117 | orchestrator |
2025-04-13 00:57:27.391126 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] ***
2025-04-13 00:57:27.391135 | orchestrator | Sunday 13 April 2025 00:46:43 +0000 (0:00:01.365) 0:02:18.051 **********
2025-04-13 00:57:27.391159 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.391168 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.391177 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.391185 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.391193 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.391202 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.391218 | orchestrator |
2025-04-13 00:57:27.391230 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] ***
2025-04-13 00:57:27.391239 | orchestrator | Sunday 13 April 2025 00:47:18 +0000 (0:00:35.187) 0:02:53.238 **********
2025-04-13 00:57:27.391248 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-04-13 00:57:27.391256 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-04-13 00:57:27.391266 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-04-13 00:57:27.391276 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.391285 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-04-13 00:57:27.391294 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-04-13 00:57:27.391357 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-04-13 00:57:27.391369 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.391379 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-04-13 00:57:27.391389 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-04-13 00:57:27.391398 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-04-13 00:57:27.391408 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.391417 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-04-13 00:57:27.391427 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-04-13 00:57:27.391436 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-04-13 00:57:27.391446 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.391455 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-04-13 00:57:27.391465 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-04-13 00:57:27.391475 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-04-13 00:57:27.391484 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.391493 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-04-13 00:57:27.391502 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-04-13 00:57:27.391512 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-04-13 00:57:27.391522 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.391531 | orchestrator |
2025-04-13 00:57:27.391541 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] ***********
2025-04-13 00:57:27.391550 | orchestrator | Sunday 13 April 2025 00:47:19 +0000 (0:00:00.948) 0:02:54.187 **********
2025-04-13 00:57:27.391560 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.391569 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.391579 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.391588 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.391597 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.391607 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.391617 | orchestrator |
2025-04-13 00:57:27.391626 | orchestrator | TASK [ceph-container-common : export local ceph dev image] *********************
2025-04-13 00:57:27.391635 | orchestrator | Sunday 13 April 2025 00:47:20 +0000 (0:00:00.715) 0:02:54.903 **********
2025-04-13 00:57:27.391643 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.391651 | orchestrator |
2025-04-13 00:57:27.391660 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************
2025-04-13 00:57:27.391668 | orchestrator | Sunday 13 April 2025 00:47:20 +0000 (0:00:00.174) 0:02:55.077 **********
2025-04-13 00:57:27.391676 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.391684 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.391693 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.391707 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.391716 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.391724 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.391732 | orchestrator |
2025-04-13 00:57:27.391741 | orchestrator | TASK [ceph-container-common : load ceph dev image] *****************************
2025-04-13 00:57:27.391749 | orchestrator | Sunday 13 April 2025 00:47:21 +0000 (0:00:01.045) 0:02:56.123 **********
2025-04-13 00:57:27.391758 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.391766 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.391775 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.391783 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.391791 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.391800 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.391808 | orchestrator |
2025-04-13 00:57:27.391816 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ******************
2025-04-13 00:57:27.391825 | orchestrator | Sunday 13 April 2025 00:47:22 +0000 (0:00:01.035) 0:02:57.158 **********
2025-04-13 00:57:27.391833 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.391846 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.391854 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.391862 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.391871 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.391879 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.391887 | orchestrator |
2025-04-13 00:57:27.391896 | orchestrator | TASK [ceph-container-common : get ceph version] ********************************
2025-04-13 00:57:27.391908 | orchestrator | Sunday 13 April 2025 00:47:23 +0000 (0:00:00.891) 0:02:58.049 **********
2025-04-13 00:57:27.391916 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.391925 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.391933 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.391942 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.391950 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.391958 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.391967 | orchestrator |
2025-04-13 00:57:27.391975 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] ***
2025-04-13 00:57:27.391984 | orchestrator | Sunday 13 April 2025 00:47:25 +0000 (0:00:01.842) 0:02:59.892 **********
2025-04-13 00:57:27.391992 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.392000 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.392008 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.392017 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.392025 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.392033 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.392042 | orchestrator |
2025-04-13 00:57:27.392050 | orchestrator | TASK [ceph-container-common : include release.yml] *****************************
2025-04-13 00:57:27.392059 | orchestrator | Sunday 13 April 2025 00:47:26 +0000 (0:00:00.672) 0:03:00.564 **********
2025-04-13 00:57:27.392068 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.392077 | orchestrator |
2025-04-13 00:57:27.392132 | orchestrator | TASK [ceph-container-common : set_fact
ceph_release jewel] ********************* 2025-04-13 00:57:27.392192 | orchestrator | Sunday 13 April 2025 00:47:27 +0000 (0:00:01.146) 0:03:01.711 ********** 2025-04-13 00:57:27.392201 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.392210 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.392219 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.392227 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.392235 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.392244 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.392252 | orchestrator | 2025-04-13 00:57:27.392261 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-04-13 00:57:27.392269 | orchestrator | Sunday 13 April 2025 00:47:28 +0000 (0:00:00.701) 0:03:02.413 ********** 2025-04-13 00:57:27.392286 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.392294 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.392302 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.392310 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.392317 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.392325 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.392333 | orchestrator | 2025-04-13 00:57:27.392341 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-04-13 00:57:27.392349 | orchestrator | Sunday 13 April 2025 00:47:28 +0000 (0:00:00.603) 0:03:03.016 ********** 2025-04-13 00:57:27.392357 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.392365 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.392373 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.392381 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.392389 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.392397 | orchestrator | skipping: 
[testbed-node-5] 2025-04-13 00:57:27.392405 | orchestrator | 2025-04-13 00:57:27.392413 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-04-13 00:57:27.392421 | orchestrator | Sunday 13 April 2025 00:47:29 +0000 (0:00:00.901) 0:03:03.917 ********** 2025-04-13 00:57:27.392428 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.392437 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.392445 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.392453 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.392461 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.392468 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.392476 | orchestrator | 2025-04-13 00:57:27.392484 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-04-13 00:57:27.392492 | orchestrator | Sunday 13 April 2025 00:47:30 +0000 (0:00:00.688) 0:03:04.606 ********** 2025-04-13 00:57:27.392500 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.392507 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.392515 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.392523 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.392531 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.392539 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.392547 | orchestrator | 2025-04-13 00:57:27.392555 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-04-13 00:57:27.392563 | orchestrator | Sunday 13 April 2025 00:47:31 +0000 (0:00:00.973) 0:03:05.580 ********** 2025-04-13 00:57:27.392571 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.392578 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.392586 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.392594 | orchestrator | skipping: 
[testbed-node-3] 2025-04-13 00:57:27.392606 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.392614 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.392622 | orchestrator | 2025-04-13 00:57:27.392630 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-04-13 00:57:27.392638 | orchestrator | Sunday 13 April 2025 00:47:32 +0000 (0:00:00.723) 0:03:06.303 ********** 2025-04-13 00:57:27.392646 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.392654 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.392662 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.392670 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.392678 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.392686 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.392693 | orchestrator | 2025-04-13 00:57:27.392701 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-04-13 00:57:27.392709 | orchestrator | Sunday 13 April 2025 00:47:32 +0000 (0:00:00.860) 0:03:07.163 ********** 2025-04-13 00:57:27.392717 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.392725 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.392733 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.392746 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.392755 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.392763 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.392772 | orchestrator | 2025-04-13 00:57:27.392781 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-13 00:57:27.392790 | orchestrator | Sunday 13 April 2025 00:47:34 +0000 (0:00:01.141) 0:03:08.304 ********** 2025-04-13 00:57:27.392799 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:57:27.392808 | orchestrator | 2025-04-13 00:57:27.392818 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-04-13 00:57:27.392826 | orchestrator | Sunday 13 April 2025 00:47:35 +0000 (0:00:01.100) 0:03:09.405 ********** 2025-04-13 00:57:27.392835 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-04-13 00:57:27.392844 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-04-13 00:57:27.392853 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-04-13 00:57:27.392861 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-04-13 00:57:27.392870 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-04-13 00:57:27.392880 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-04-13 00:57:27.392888 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-04-13 00:57:27.392897 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-04-13 00:57:27.392958 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-04-13 00:57:27.392970 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-04-13 00:57:27.392979 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-04-13 00:57:27.392988 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-04-13 00:57:27.392998 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-04-13 00:57:27.393007 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-04-13 00:57:27.393016 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-04-13 00:57:27.393025 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-04-13 00:57:27.393034 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-04-13 00:57:27.393042 | orchestrator | 
changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-04-13 00:57:27.393051 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-04-13 00:57:27.393060 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-04-13 00:57:27.393068 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-04-13 00:57:27.393077 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-04-13 00:57:27.393085 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-04-13 00:57:27.393094 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-04-13 00:57:27.393103 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-04-13 00:57:27.393112 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-04-13 00:57:27.393120 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-04-13 00:57:27.393129 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-04-13 00:57:27.393151 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-04-13 00:57:27.393160 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-04-13 00:57:27.393168 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-04-13 00:57:27.393175 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-04-13 00:57:27.393183 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-04-13 00:57:27.393191 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-04-13 00:57:27.393199 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-04-13 00:57:27.393213 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-04-13 00:57:27.393224 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-04-13 00:57:27.393232 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/ceph/radosgw) 2025-04-13 00:57:27.393240 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-04-13 00:57:27.393248 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-04-13 00:57:27.393256 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-04-13 00:57:27.393264 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-04-13 00:57:27.393271 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-13 00:57:27.393279 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-13 00:57:27.393287 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-13 00:57:27.393295 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-13 00:57:27.393303 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-13 00:57:27.393311 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-13 00:57:27.393318 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-13 00:57:27.393326 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-13 00:57:27.393334 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-13 00:57:27.393342 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-13 00:57:27.393350 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-13 00:57:27.393358 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-13 00:57:27.393366 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-13 00:57:27.393373 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-13 00:57:27.393381 | orchestrator | 
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-13 00:57:27.393389 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-13 00:57:27.393397 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-13 00:57:27.393405 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-13 00:57:27.393413 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-13 00:57:27.393421 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-13 00:57:27.393429 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-13 00:57:27.393436 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-13 00:57:27.393445 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-13 00:57:27.393452 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-13 00:57:27.393460 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-13 00:57:27.393514 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-13 00:57:27.393525 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-13 00:57:27.393533 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-13 00:57:27.393541 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-13 00:57:27.393549 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-13 00:57:27.393561 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-13 00:57:27.393569 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-13 00:57:27.393577 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-13 00:57:27.393589 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-13 00:57:27.393597 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-13 00:57:27.393605 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-13 00:57:27.393613 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-04-13 00:57:27.393621 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-04-13 00:57:27.393629 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-04-13 00:57:27.393637 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-04-13 00:57:27.393644 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-04-13 00:57:27.393652 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-04-13 00:57:27.393660 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-04-13 00:57:27.393668 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-04-13 00:57:27.393675 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-04-13 00:57:27.393684 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-04-13 00:57:27.393691 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-04-13 00:57:27.393699 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-04-13 00:57:27.393707 | orchestrator | 2025-04-13 00:57:27.393715 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-13 00:57:27.393726 | orchestrator | Sunday 13 April 2025 00:47:40 +0000 (0:00:05.500) 0:03:14.906 ********** 2025-04-13 00:57:27.393734 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.393742 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.393750 | orchestrator | skipping: 
[testbed-node-2] 2025-04-13 00:57:27.393758 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:57:27.393767 | orchestrator | 2025-04-13 00:57:27.393775 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-04-13 00:57:27.393782 | orchestrator | Sunday 13 April 2025 00:47:41 +0000 (0:00:01.381) 0:03:16.288 ********** 2025-04-13 00:57:27.393790 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-13 00:57:27.393798 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-13 00:57:27.393806 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-13 00:57:27.393814 | orchestrator | 2025-04-13 00:57:27.393822 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-04-13 00:57:27.393830 | orchestrator | Sunday 13 April 2025 00:47:43 +0000 (0:00:01.357) 0:03:17.645 ********** 2025-04-13 00:57:27.393838 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-13 00:57:27.393846 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-13 00:57:27.393854 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-13 00:57:27.393862 | orchestrator | 2025-04-13 00:57:27.393870 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-13 00:57:27.393877 | 
orchestrator | Sunday 13 April 2025 00:47:44 +0000 (0:00:01.135) 0:03:18.781 ********** 2025-04-13 00:57:27.393885 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.393893 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.393901 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.393909 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.393922 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.393930 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.393938 | orchestrator | 2025-04-13 00:57:27.393946 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-13 00:57:27.393954 | orchestrator | Sunday 13 April 2025 00:47:45 +0000 (0:00:00.892) 0:03:19.673 ********** 2025-04-13 00:57:27.393961 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.393969 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.393977 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.393985 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.393993 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.394001 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.394009 | orchestrator | 2025-04-13 00:57:27.394037 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-13 00:57:27.394048 | orchestrator | Sunday 13 April 2025 00:47:46 +0000 (0:00:00.721) 0:03:20.395 ********** 2025-04-13 00:57:27.394056 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.394108 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.394119 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.394128 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.394136 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.394158 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.394166 | orchestrator | 2025-04-13 00:57:27.394174 | orchestrator | TASK 
[ceph-config : set_fact rejected_devices] ********************************* 2025-04-13 00:57:27.394182 | orchestrator | Sunday 13 April 2025 00:47:47 +0000 (0:00:01.115) 0:03:21.510 ********** 2025-04-13 00:57:27.394190 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.394198 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.394206 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.394214 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.394222 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.394230 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.394238 | orchestrator | 2025-04-13 00:57:27.394246 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-13 00:57:27.394254 | orchestrator | Sunday 13 April 2025 00:47:47 +0000 (0:00:00.738) 0:03:22.249 ********** 2025-04-13 00:57:27.394261 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.394269 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.394277 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.394286 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.394294 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.394302 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.394310 | orchestrator | 2025-04-13 00:57:27.394318 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-13 00:57:27.394326 | orchestrator | Sunday 13 April 2025 00:47:48 +0000 (0:00:01.006) 0:03:23.255 ********** 2025-04-13 00:57:27.394334 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.394341 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.394349 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.394357 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.394365 | orchestrator | skipping: [testbed-node-4] 
2025-04-13 00:57:27.394379 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.394387 | orchestrator | 2025-04-13 00:57:27.394396 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-13 00:57:27.394404 | orchestrator | Sunday 13 April 2025 00:47:49 +0000 (0:00:00.715) 0:03:23.971 ********** 2025-04-13 00:57:27.394412 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.394420 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.394428 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.394436 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.394444 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.394452 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.394460 | orchestrator | 2025-04-13 00:57:27.394476 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-13 00:57:27.394484 | orchestrator | Sunday 13 April 2025 00:47:50 +0000 (0:00:00.892) 0:03:24.863 ********** 2025-04-13 00:57:27.394492 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.394500 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.394507 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.394516 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.394523 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.394531 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.394539 | orchestrator | 2025-04-13 00:57:27.394547 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-13 00:57:27.394555 | orchestrator | Sunday 13 April 2025 00:47:51 +0000 (0:00:00.648) 0:03:25.512 ********** 2025-04-13 00:57:27.394563 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.394571 | orchestrator | skipping: [testbed-node-1] 
2025-04-13 00:57:27.394578 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.394586 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.394594 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.394602 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.394610 | orchestrator | 2025-04-13 00:57:27.394618 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-13 00:57:27.394626 | orchestrator | Sunday 13 April 2025 00:47:53 +0000 (0:00:02.315) 0:03:27.827 ********** 2025-04-13 00:57:27.394634 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.394642 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.394650 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.394657 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.394665 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.394673 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.394681 | orchestrator | 2025-04-13 00:57:27.394689 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-13 00:57:27.394696 | orchestrator | Sunday 13 April 2025 00:47:54 +0000 (0:00:00.683) 0:03:28.511 ********** 2025-04-13 00:57:27.394704 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-13 00:57:27.394712 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-13 00:57:27.394720 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.394728 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-13 00:57:27.394740 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-13 00:57:27.394749 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.394758 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-13 00:57:27.394767 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-13 00:57:27.394776 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.394785 | 
orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-13 00:57:27.394795 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-13 00:57:27.394804 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.394813 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-13 00:57:27.394822 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-13 00:57:27.394831 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.394839 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-13 00:57:27.394847 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-13 00:57:27.394855 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.394863 | orchestrator | 2025-04-13 00:57:27.394871 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-13 00:57:27.394925 | orchestrator | Sunday 13 April 2025 00:47:55 +0000 (0:00:01.001) 0:03:29.512 ********** 2025-04-13 00:57:27.394937 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-13 00:57:27.394949 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-13 00:57:27.394957 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.394965 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-13 00:57:27.394978 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-13 00:57:27.394986 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.394994 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-13 00:57:27.395002 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-13 00:57:27.395010 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.395018 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-04-13 00:57:27.395026 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-04-13 00:57:27.395034 | 
orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-04-13 00:57:27.395042 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-04-13 00:57:27.395050 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-04-13 00:57:27.395057 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-04-13 00:57:27.395065 | orchestrator | 2025-04-13 00:57:27.395073 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-13 00:57:27.395081 | orchestrator | Sunday 13 April 2025 00:47:56 +0000 (0:00:00.811) 0:03:30.324 ********** 2025-04-13 00:57:27.395089 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.395097 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.395104 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.395112 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.395120 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.395128 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.395136 | orchestrator | 2025-04-13 00:57:27.395158 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-13 00:57:27.395166 | orchestrator | Sunday 13 April 2025 00:47:57 +0000 (0:00:01.157) 0:03:31.481 ********** 2025-04-13 00:57:27.395174 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.395182 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.395189 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.395198 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.395205 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.395213 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.395221 | orchestrator | 2025-04-13 00:57:27.395229 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-13 
00:57:27.395237 | orchestrator | Sunday 13 April 2025 00:47:57 +0000 (0:00:00.662) 0:03:32.144 **********
2025-04-13 00:57:27.395245 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.395253 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.395261 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.395268 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.395280 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.395288 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.395295 | orchestrator |
2025-04-13 00:57:27.395303 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-13 00:57:27.395311 | orchestrator | Sunday 13 April 2025 00:47:58 +0000 (0:00:00.930) 0:03:33.074 **********
2025-04-13 00:57:27.395319 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.395327 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.395335 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.395343 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.395351 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.395358 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.395366 | orchestrator |
2025-04-13 00:57:27.395378 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-13 00:57:27.395389 | orchestrator | Sunday 13 April 2025 00:47:59 +0000 (0:00:00.634) 0:03:33.709 **********
2025-04-13 00:57:27.395401 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.395414 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.395427 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.395446 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.395458 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.395471 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.395479 | orchestrator |
2025-04-13 00:57:27.395487 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-13 00:57:27.395495 | orchestrator | Sunday 13 April 2025 00:48:00 +0000 (0:00:00.917) 0:03:34.627 **********
2025-04-13 00:57:27.395502 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.395510 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.395518 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.395526 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.395534 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.395542 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.395550 | orchestrator |
2025-04-13 00:57:27.395558 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-13 00:57:27.395567 | orchestrator | Sunday 13 April 2025 00:48:01 +0000 (0:00:01.038) 0:03:35.666 **********
2025-04-13 00:57:27.395575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-13 00:57:27.395585 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-13 00:57:27.395593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-13 00:57:27.395602 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.395611 | orchestrator |
2025-04-13 00:57:27.395619 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-13 00:57:27.395628 | orchestrator | Sunday 13 April 2025 00:48:02 +0000 (0:00:01.013) 0:03:36.679 **********
2025-04-13 00:57:27.395636 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-13 00:57:27.395645 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-13 00:57:27.395654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-13 00:57:27.395663 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.395672 | orchestrator |
2025-04-13 00:57:27.395736 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-13 00:57:27.395747 | orchestrator | Sunday 13 April 2025 00:48:02 +0000 (0:00:00.464) 0:03:37.144 **********
2025-04-13 00:57:27.395757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-13 00:57:27.395766 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-13 00:57:27.395775 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-13 00:57:27.395784 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.395793 | orchestrator |
2025-04-13 00:57:27.395802 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-13 00:57:27.395811 | orchestrator | Sunday 13 April 2025 00:48:03 +0000 (0:00:00.509) 0:03:37.653 **********
2025-04-13 00:57:27.395820 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.395828 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.395837 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.395846 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.395855 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.395864 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.395873 | orchestrator |
2025-04-13 00:57:27.395882 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-13 00:57:27.395891 | orchestrator | Sunday 13 April 2025 00:48:04 +0000 (0:00:00.695) 0:03:38.348 **********
2025-04-13 00:57:27.395900 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-04-13 00:57:27.395909 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.395918 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-04-13 00:57:27.395926 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.395934 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-04-13
00:57:27.395942 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.395950 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-04-13 00:57:27.395958 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-04-13 00:57:27.395971 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-04-13 00:57:27.395979 | orchestrator |
2025-04-13 00:57:27.395987 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-04-13 00:57:27.395995 | orchestrator | Sunday 13 April 2025 00:48:05 +0000 (0:00:01.005) 0:03:39.354 **********
2025-04-13 00:57:27.396002 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.396010 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.396018 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.396026 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.396034 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.396042 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.396050 | orchestrator |
2025-04-13 00:57:27.396058 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-13 00:57:27.396066 | orchestrator | Sunday 13 April 2025 00:48:05 +0000 (0:00:00.580) 0:03:39.934 **********
2025-04-13 00:57:27.396074 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.396082 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.396089 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.396097 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.396105 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.396113 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.396121 | orchestrator |
2025-04-13 00:57:27.396129 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-04-13 00:57:27.396182 | orchestrator | Sunday 13 April 2025 00:48:06 +0000 (0:00:00.830) 0:03:40.765
**********
2025-04-13 00:57:27.396193 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-04-13 00:57:27.396201 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.396209 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-04-13 00:57:27.396217 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.396225 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-04-13 00:57:27.396233 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.396241 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-13 00:57:27.396249 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.396263 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-13 00:57:27.396270 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.396277 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-13 00:57:27.396284 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.396291 | orchestrator |
2025-04-13 00:57:27.396298 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-04-13 00:57:27.396305 | orchestrator | Sunday 13 April 2025 00:48:07 +0000 (0:00:00.903) 0:03:41.668 **********
2025-04-13 00:57:27.396312 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.396319 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.396326 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.396333 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-04-13 00:57:27.396340 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.396347 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-04-13 00:57:27.396354 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.396361 | orchestrator | skipping:
[testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-04-13 00:57:27.396369 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.396375 | orchestrator |
2025-04-13 00:57:27.396383 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-04-13 00:57:27.396390 | orchestrator | Sunday 13 April 2025 00:48:08 +0000 (0:00:01.010) 0:03:42.679 **********
2025-04-13 00:57:27.396396 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-13 00:57:27.396404 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-13 00:57:27.396416 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-13 00:57:27.396423 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.396430 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-04-13 00:57:27.396482 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-04-13 00:57:27.396492 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-04-13 00:57:27.396499 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.396506 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-04-13 00:57:27.396513 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-04-13 00:57:27.396520 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-04-13 00:57:27.396527 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.396534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.396541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.396548 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-04-13 00:57:27.396555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.396561 | orchestrator |
skipping: [testbed-node-3]
2025-04-13 00:57:27.396568 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-04-13 00:57:27.396575 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-04-13 00:57:27.396582 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-04-13 00:57:27.396589 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.396596 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-04-13 00:57:27.396603 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-04-13 00:57:27.396609 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.396616 | orchestrator |
2025-04-13 00:57:27.396623 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-04-13 00:57:27.396630 | orchestrator | Sunday 13 April 2025 00:48:10 +0000 (0:00:01.813) 0:03:44.493 **********
2025-04-13 00:57:27.396637 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.396644 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.396651 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.396658 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.396664 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.396671 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.396678 | orchestrator |
2025-04-13 00:57:27.396685 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-04-13 00:57:27.396692 | orchestrator | Sunday 13 April 2025 00:48:15 +0000 (0:00:05.416) 0:03:49.909 **********
2025-04-13 00:57:27.396699 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.396706 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.396713 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.396720 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.396727 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.396734 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.396741 | orchestrator |
2025-04-13 00:57:27.396748 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] **********************************
2025-04-13 00:57:27.396755 | orchestrator | Sunday 13 April 2025 00:48:16 +0000 (0:00:01.382) 0:03:51.292 **********
2025-04-13 00:57:27.396761 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.396768 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.396775 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.396782 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:57:27.396789 | orchestrator |
2025-04-13 00:57:27.396796 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ********
2025-04-13 00:57:27.396803 | orchestrator | Sunday 13 April 2025 00:48:18 +0000 (0:00:01.120) 0:03:52.412 **********
2025-04-13 00:57:27.396810 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.396821 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.396828 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.396835 | orchestrator |
2025-04-13 00:57:27.396845 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] *******************
2025-04-13 00:57:27.396853 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.396860 | orchestrator |
2025-04-13 00:57:27.396867 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-04-13 00:57:27.396874 | orchestrator | Sunday 13 April 2025 00:48:19 +0000 (0:00:01.149) 0:03:53.562 **********
2025-04-13 00:57:27.396881 | orchestrator |
2025-04-13 00:57:27.396888 | orchestrator | TASK [ceph-handler : copy mon restart script] **********************************
2025-04-13
00:57:27.396895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.396902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.396908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.396915 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.396923 | orchestrator |
2025-04-13 00:57:27.396930 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-04-13 00:57:27.396937 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.396943 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.396950 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.396957 | orchestrator |
2025-04-13 00:57:27.396964 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ********************
2025-04-13 00:57:27.396971 | orchestrator | Sunday 13 April 2025 00:48:20 +0000 (0:00:01.223) 0:03:54.786 **********
2025-04-13 00:57:27.396978 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 00:57:27.396988 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-13 00:57:27.396996 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-13 00:57:27.397002 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.397009 | orchestrator |
2025-04-13 00:57:27.397016 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] *********
2025-04-13 00:57:27.397024 | orchestrator | Sunday 13 April 2025 00:48:21 +0000 (0:00:00.942) 0:03:55.728 **********
2025-04-13 00:57:27.397031 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.397037 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.397044 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.397051 | orchestrator |
2025-04-13 00:57:27.397058 | orchestrator | TASK [ceph-handler : set _mon_handler_called after
restart] ********************
2025-04-13 00:57:27.397103 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.397112 | orchestrator |
2025-04-13 00:57:27.397119 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] **********************************
2025-04-13 00:57:27.397126 | orchestrator | Sunday 13 April 2025 00:48:22 +0000 (0:00:00.792) 0:03:56.521 **********
2025-04-13 00:57:27.397133 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.397158 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.397165 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.397172 | orchestrator |
2025-04-13 00:57:27.397179 | orchestrator | TASK [ceph-handler : osds handler] *********************************************
2025-04-13 00:57:27.397186 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.397193 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.397199 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.397206 | orchestrator |
2025-04-13 00:57:27.397213 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-04-13 00:57:27.397220 | orchestrator | Sunday 13 April 2025 00:48:22 +0000 (0:00:00.649) 0:03:57.171 **********
2025-04-13 00:57:27.397227 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.397234 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.397241 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.397248 | orchestrator |
2025-04-13 00:57:27.397255 | orchestrator | TASK [ceph-handler : mdss handler] *********************************************
2025-04-13 00:57:27.397267 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.397277 | orchestrator |
2025-04-13 00:57:27.397284 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-04-13 00:57:27.397291 | orchestrator | Sunday 13 April 2025 00:48:23 +0000 (0:00:00.821) 0:03:57.996
**********
2025-04-13 00:57:27.397298 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.397305 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.397312 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.397319 | orchestrator |
2025-04-13 00:57:27.397326 | orchestrator | TASK [ceph-handler : rgws handler] *********************************************
2025-04-13 00:57:27.397333 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.397339 | orchestrator |
2025-04-13 00:57:27.397346 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
2025-04-13 00:57:27.397353 | orchestrator | Sunday 13 April 2025 00:48:24 +0000 (0:00:00.821) 0:03:58.818 **********
2025-04-13 00:57:27.397360 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.397367 | orchestrator |
2025-04-13 00:57:27.397374 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] ****************************
2025-04-13 00:57:27.397381 | orchestrator | Sunday 13 April 2025 00:48:24 +0000 (0:00:00.121) 0:03:58.940 **********
2025-04-13 00:57:27.397388 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.397395 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.397402 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.397409 | orchestrator |
2025-04-13 00:57:27.397416 | orchestrator | TASK [ceph-handler : rbdmirrors handler] ***************************************
2025-04-13 00:57:27.397423 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.397430 | orchestrator |
2025-04-13 00:57:27.397437 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-04-13 00:57:27.397443 | orchestrator | Sunday 13 April 2025 00:48:25 +0000 (0:00:00.795) 0:03:59.735 **********
2025-04-13 00:57:27.397450 | orchestrator |
2025-04-13 00:57:27.397457 | orchestrator | TASK [ceph-handler : mgrs handler]
*********************************************
2025-04-13 00:57:27.397464 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.397471 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:57:27.397478 | orchestrator |
2025-04-13 00:57:27.397485 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ********
2025-04-13 00:57:27.397492 | orchestrator | Sunday 13 April 2025 00:48:26 +0000 (0:00:00.812) 0:04:00.547 **********
2025-04-13 00:57:27.397499 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.397506 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.397513 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.397520 | orchestrator |
2025-04-13 00:57:27.397527 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] *******************
2025-04-13 00:57:27.397533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.397540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.397547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.397554 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.397561 | orchestrator |
2025-04-13 00:57:27.397568 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-04-13 00:57:27.397580 | orchestrator | Sunday 13 April 2025 00:48:27 +0000 (0:00:01.173) 0:04:01.721 **********
2025-04-13 00:57:27.397588 | orchestrator |
2025-04-13 00:57:27.397595 | orchestrator | TASK [ceph-handler : copy mgr restart script] **********************************
2025-04-13 00:57:27.397601 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.397608 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.397615 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.397622 | orchestrator |
2025-04-13 00:57:27.397629 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-04-13 00:57:27.397636 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.397643 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.397655 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.397662 | orchestrator |
2025-04-13 00:57:27.397669 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ********************
2025-04-13 00:57:27.397675 | orchestrator | Sunday 13 April 2025 00:48:28 +0000 (0:00:01.238) 0:04:02.959 **********
2025-04-13 00:57:27.397682 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 00:57:27.397690 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-13 00:57:27.397697 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-13 00:57:27.397704 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.397711 | orchestrator |
2025-04-13 00:57:27.397718 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
2025-04-13 00:57:27.397725 | orchestrator | Sunday 13 April 2025 00:48:29 +0000 (0:00:00.942) 0:04:03.902 **********
2025-04-13 00:57:27.397732 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.397739 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.397747 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.397755 | orchestrator |
2025-04-13 00:57:27.397804 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ********************
2025-04-13 00:57:27.397815 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.397823 | orchestrator |
2025-04-13 00:57:27.397831 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-04-13 00:57:27.397839 | orchestrator | Sunday 13 April 2025 00:48:30 +0000 (0:00:01.035) 0:04:04.938 **********
2025-04-13 00:57:27.397847 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.397855 | orchestrator |
2025-04-13 00:57:27.397862 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
2025-04-13 00:57:27.397870 | orchestrator | Sunday 13 April 2025 00:48:31 +0000 (0:00:00.562) 0:04:05.501 **********
2025-04-13 00:57:27.397878 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.397886 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.397893 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.397901 | orchestrator |
2025-04-13 00:57:27.397909 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] *****************
2025-04-13 00:57:27.397916 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.397924 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.397932 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.397940 | orchestrator |
2025-04-13 00:57:27.397948 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] ***********************
2025-04-13 00:57:27.397956 | orchestrator | Sunday 13 April 2025 00:48:32 +0000 (0:00:01.136) 0:04:06.637 **********
2025-04-13 00:57:27.397963 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.397971 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.397979 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.397987 | orchestrator |
2025-04-13 00:57:27.397994 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-13 00:57:27.398002 | orchestrator | Sunday 13 April 2025 00:48:33 +0000 (0:00:01.403) 0:04:08.041 **********
2025-04-13 00:57:27.398010 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.398045 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.398053 | orchestrator |
2025-04-13
00:57:27.398060 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] *******************************
2025-04-13 00:57:27.398068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.398076 | orchestrator |
2025-04-13 00:57:27.398083 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-13 00:57:27.398091 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.398099 | orchestrator |
2025-04-13 00:57:27.398107 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] *******************************
2025-04-13 00:57:27.398114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.398121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.398136 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.398163 | orchestrator |
2025-04-13 00:57:27.398175 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] *********
2025-04-13 00:57:27.398186 | orchestrator | Sunday 13 April 2025 00:48:34 +0000 (0:00:01.215) 0:04:09.257 **********
2025-04-13 00:57:27.398197 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.398205 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.398212 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.398219 | orchestrator |
2025-04-13 00:57:27.398226 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-04-13 00:57:27.398233 | orchestrator | Sunday 13 April 2025 00:48:36 +0000 (0:00:01.074) 0:04:10.331 **********
2025-04-13 00:57:27.398240 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.398247 | orchestrator |
2025-04-13 00:57:27.398254 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ********
2025-04-13
00:57:27.398261 | orchestrator | Sunday 13 April 2025 00:48:36 +0000 (0:00:00.598) 0:04:10.930 **********
2025-04-13 00:57:27.398268 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.398275 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.398282 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.398289 | orchestrator |
2025-04-13 00:57:27.398295 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] ***********************
2025-04-13 00:57:27.398302 | orchestrator | Sunday 13 April 2025 00:48:37 +0000 (0:00:00.672) 0:04:11.602 **********
2025-04-13 00:57:27.398309 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.398316 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.398323 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.398330 | orchestrator |
2025-04-13 00:57:27.398337 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ********************
2025-04-13 00:57:27.398344 | orchestrator | Sunday 13 April 2025 00:48:38 +0000 (0:00:01.367) 0:04:12.970 **********
2025-04-13 00:57:27.398351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.398358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.398365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.398372 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.398379 | orchestrator |
2025-04-13 00:57:27.398386 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
2025-04-13 00:57:27.398393 | orchestrator | Sunday 13 April 2025 00:48:39 +0000 (0:00:00.715) 0:04:13.685 **********
2025-04-13 00:57:27.398400 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.398407 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.398414 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.398421 | orchestrator |
2025-04-13 00:57:27.398432 |
orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] ****************************
2025-04-13 00:57:27.398439 | orchestrator | Sunday 13 April 2025 00:48:39 +0000 (0:00:00.466) 0:04:14.152 **********
2025-04-13 00:57:27.398446 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.398457 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.398464 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.398471 | orchestrator |
2025-04-13 00:57:27.398478 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-04-13 00:57:27.398485 | orchestrator | Sunday 13 April 2025 00:48:40 +0000 (0:00:00.591) 0:04:14.743 **********
2025-04-13 00:57:27.398492 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.398499 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.398552 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.398562 | orchestrator |
2025-04-13 00:57:27.398569 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
2025-04-13 00:57:27.398576 | orchestrator | Sunday 13 April 2025 00:48:40 +0000 (0:00:00.429) 0:04:15.172 **********
2025-04-13 00:57:27.398583 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.398590 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.398602 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.398609 | orchestrator |
2025-04-13 00:57:27.398616 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-13 00:57:27.398623 | orchestrator | Sunday 13 April 2025 00:48:41 +0000 (0:00:00.370) 0:04:15.543 **********
2025-04-13 00:57:27.398630 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.398637 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.398644 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.398651 | orchestrator |
2025-04-13 00:57:27.398658 |
orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-04-13 00:57:27.398665 | orchestrator | 2025-04-13 00:57:27.398672 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-13 00:57:27.398679 | orchestrator | Sunday 13 April 2025 00:48:43 +0000 (0:00:02.413) 0:04:17.957 ********** 2025-04-13 00:57:27.398686 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:57:27.398694 | orchestrator | 2025-04-13 00:57:27.398701 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-13 00:57:27.398707 | orchestrator | Sunday 13 April 2025 00:48:44 +0000 (0:00:00.624) 0:04:18.582 ********** 2025-04-13 00:57:27.398714 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.398721 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.398728 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.398735 | orchestrator | 2025-04-13 00:57:27.398742 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-13 00:57:27.398749 | orchestrator | Sunday 13 April 2025 00:48:45 +0000 (0:00:00.753) 0:04:19.335 ********** 2025-04-13 00:57:27.398755 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.398762 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.398769 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.398776 | orchestrator | 2025-04-13 00:57:27.398783 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-13 00:57:27.398790 | orchestrator | Sunday 13 April 2025 00:48:45 +0000 (0:00:00.575) 0:04:19.910 ********** 2025-04-13 00:57:27.398797 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.398804 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.398811 | orchestrator 
| skipping: [testbed-node-2] 2025-04-13 00:57:27.398818 | orchestrator | 2025-04-13 00:57:27.398825 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-13 00:57:27.398832 | orchestrator | Sunday 13 April 2025 00:48:45 +0000 (0:00:00.362) 0:04:20.273 ********** 2025-04-13 00:57:27.398839 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.398846 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.398853 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.398860 | orchestrator | 2025-04-13 00:57:27.398867 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-13 00:57:27.398874 | orchestrator | Sunday 13 April 2025 00:48:46 +0000 (0:00:00.353) 0:04:20.626 ********** 2025-04-13 00:57:27.398881 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.398888 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.398895 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.398902 | orchestrator | 2025-04-13 00:57:27.398909 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-13 00:57:27.398916 | orchestrator | Sunday 13 April 2025 00:48:47 +0000 (0:00:00.733) 0:04:21.359 ********** 2025-04-13 00:57:27.398923 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.398930 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.398937 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.398943 | orchestrator | 2025-04-13 00:57:27.398951 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-13 00:57:27.398957 | orchestrator | Sunday 13 April 2025 00:48:47 +0000 (0:00:00.578) 0:04:21.938 ********** 2025-04-13 00:57:27.398964 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.398971 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.398982 | orchestrator | skipping: 
[testbed-node-2] 2025-04-13 00:57:27.398989 | orchestrator | 2025-04-13 00:57:27.398996 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-13 00:57:27.399003 | orchestrator | Sunday 13 April 2025 00:48:47 +0000 (0:00:00.345) 0:04:22.284 ********** 2025-04-13 00:57:27.399010 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399017 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399024 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399031 | orchestrator | 2025-04-13 00:57:27.399038 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-13 00:57:27.399045 | orchestrator | Sunday 13 April 2025 00:48:48 +0000 (0:00:00.351) 0:04:22.636 ********** 2025-04-13 00:57:27.399052 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399059 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399066 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399073 | orchestrator | 2025-04-13 00:57:27.399080 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-13 00:57:27.399087 | orchestrator | Sunday 13 April 2025 00:48:48 +0000 (0:00:00.351) 0:04:22.988 ********** 2025-04-13 00:57:27.399094 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399101 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399108 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399115 | orchestrator | 2025-04-13 00:57:27.399122 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-13 00:57:27.399132 | orchestrator | Sunday 13 April 2025 00:48:49 +0000 (0:00:00.593) 0:04:23.581 ********** 2025-04-13 00:57:27.399154 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.399162 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.399169 | orchestrator | ok: [testbed-node-2] 
2025-04-13 00:57:27.399176 | orchestrator | 2025-04-13 00:57:27.399183 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-13 00:57:27.399233 | orchestrator | Sunday 13 April 2025 00:48:49 +0000 (0:00:00.706) 0:04:24.288 ********** 2025-04-13 00:57:27.399244 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399251 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399259 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399266 | orchestrator | 2025-04-13 00:57:27.399274 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-13 00:57:27.399282 | orchestrator | Sunday 13 April 2025 00:48:50 +0000 (0:00:00.363) 0:04:24.651 ********** 2025-04-13 00:57:27.399290 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.399298 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.399305 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.399317 | orchestrator | 2025-04-13 00:57:27.399325 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-13 00:57:27.399332 | orchestrator | Sunday 13 April 2025 00:48:50 +0000 (0:00:00.365) 0:04:25.016 ********** 2025-04-13 00:57:27.399340 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399347 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399355 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399363 | orchestrator | 2025-04-13 00:57:27.399370 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-13 00:57:27.399378 | orchestrator | Sunday 13 April 2025 00:48:51 +0000 (0:00:00.655) 0:04:25.672 ********** 2025-04-13 00:57:27.399386 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399394 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399401 | orchestrator | skipping: [testbed-node-2] 2025-04-13 
00:57:27.399409 | orchestrator | 2025-04-13 00:57:27.399417 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-13 00:57:27.399425 | orchestrator | Sunday 13 April 2025 00:48:51 +0000 (0:00:00.344) 0:04:26.016 ********** 2025-04-13 00:57:27.399432 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399440 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399448 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399461 | orchestrator | 2025-04-13 00:57:27.399469 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-13 00:57:27.399477 | orchestrator | Sunday 13 April 2025 00:48:52 +0000 (0:00:00.342) 0:04:26.359 ********** 2025-04-13 00:57:27.399484 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399492 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399500 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399508 | orchestrator | 2025-04-13 00:57:27.399515 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-13 00:57:27.399523 | orchestrator | Sunday 13 April 2025 00:48:52 +0000 (0:00:00.338) 0:04:26.697 ********** 2025-04-13 00:57:27.399531 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399539 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399547 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399554 | orchestrator | 2025-04-13 00:57:27.399561 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-13 00:57:27.399567 | orchestrator | Sunday 13 April 2025 00:48:53 +0000 (0:00:00.634) 0:04:27.332 ********** 2025-04-13 00:57:27.399574 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.399581 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.399588 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.399595 | 
orchestrator | 2025-04-13 00:57:27.399602 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-13 00:57:27.399609 | orchestrator | Sunday 13 April 2025 00:48:53 +0000 (0:00:00.372) 0:04:27.705 ********** 2025-04-13 00:57:27.399616 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.399623 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.399629 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.399636 | orchestrator | 2025-04-13 00:57:27.399643 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-13 00:57:27.399650 | orchestrator | Sunday 13 April 2025 00:48:53 +0000 (0:00:00.361) 0:04:28.066 ********** 2025-04-13 00:57:27.399657 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399664 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399671 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399678 | orchestrator | 2025-04-13 00:57:27.399685 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-13 00:57:27.399692 | orchestrator | Sunday 13 April 2025 00:48:54 +0000 (0:00:00.572) 0:04:28.639 ********** 2025-04-13 00:57:27.399699 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399705 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399712 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399719 | orchestrator | 2025-04-13 00:57:27.399726 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-13 00:57:27.399733 | orchestrator | Sunday 13 April 2025 00:48:54 +0000 (0:00:00.349) 0:04:28.988 ********** 2025-04-13 00:57:27.399740 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399747 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399753 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399760 | orchestrator | 
2025-04-13 00:57:27.399767 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-13 00:57:27.399774 | orchestrator | Sunday 13 April 2025 00:48:55 +0000 (0:00:00.382) 0:04:29.370 ********** 2025-04-13 00:57:27.399781 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399788 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399795 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399801 | orchestrator | 2025-04-13 00:57:27.399808 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-13 00:57:27.399815 | orchestrator | Sunday 13 April 2025 00:48:55 +0000 (0:00:00.360) 0:04:29.731 ********** 2025-04-13 00:57:27.399822 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399829 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399836 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399843 | orchestrator | 2025-04-13 00:57:27.399854 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-13 00:57:27.399861 | orchestrator | Sunday 13 April 2025 00:48:56 +0000 (0:00:00.653) 0:04:30.385 ********** 2025-04-13 00:57:27.399868 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399875 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399881 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399888 | orchestrator | 2025-04-13 00:57:27.399895 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-13 00:57:27.399942 | orchestrator | Sunday 13 April 2025 00:48:56 +0000 (0:00:00.378) 0:04:30.763 ********** 2025-04-13 00:57:27.399952 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.399959 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.399965 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.399972 | orchestrator | 
2025-04-13 00:57:27.399979 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-13 00:57:27.399986 | orchestrator | Sunday 13 April 2025 00:48:56 +0000 (0:00:00.386) 0:04:31.150 ********** 2025-04-13 00:57:27.399993 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400000 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400007 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400014 | orchestrator | 2025-04-13 00:57:27.400021 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-13 00:57:27.400028 | orchestrator | Sunday 13 April 2025 00:48:57 +0000 (0:00:00.438) 0:04:31.588 ********** 2025-04-13 00:57:27.400035 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400041 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400048 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400055 | orchestrator | 2025-04-13 00:57:27.400062 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-13 00:57:27.400069 | orchestrator | Sunday 13 April 2025 00:48:57 +0000 (0:00:00.655) 0:04:32.244 ********** 2025-04-13 00:57:27.400076 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400083 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400089 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400100 | orchestrator | 2025-04-13 00:57:27.400107 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-13 00:57:27.400114 | orchestrator | Sunday 13 April 2025 00:48:58 +0000 (0:00:00.344) 0:04:32.589 ********** 2025-04-13 00:57:27.400121 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400128 | orchestrator | skipping: [testbed-node-1] 2025-04-13 
00:57:27.400134 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400177 | orchestrator | 2025-04-13 00:57:27.400184 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-13 00:57:27.400191 | orchestrator | Sunday 13 April 2025 00:48:58 +0000 (0:00:00.360) 0:04:32.949 ********** 2025-04-13 00:57:27.400198 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400205 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400212 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400219 | orchestrator | 2025-04-13 00:57:27.400226 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-13 00:57:27.400233 | orchestrator | Sunday 13 April 2025 00:48:59 +0000 (0:00:00.376) 0:04:33.325 ********** 2025-04-13 00:57:27.400240 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-13 00:57:27.400247 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-13 00:57:27.400253 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400260 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-13 00:57:27.400267 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-13 00:57:27.400274 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400281 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-13 00:57:27.400288 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-13 00:57:27.400300 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400307 | orchestrator | 2025-04-13 00:57:27.400314 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-13 00:57:27.400320 | orchestrator | Sunday 13 April 2025 00:48:59 +0000 (0:00:00.660) 0:04:33.986 ********** 2025-04-13 00:57:27.400327 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-13 00:57:27.400334 | orchestrator | 
skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-13 00:57:27.400341 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400348 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-13 00:57:27.400355 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-13 00:57:27.400362 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400368 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-13 00:57:27.400375 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-13 00:57:27.400382 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400389 | orchestrator | 2025-04-13 00:57:27.400396 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-13 00:57:27.400403 | orchestrator | Sunday 13 April 2025 00:49:00 +0000 (0:00:00.385) 0:04:34.371 ********** 2025-04-13 00:57:27.400410 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400417 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400423 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400430 | orchestrator | 2025-04-13 00:57:27.400437 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-13 00:57:27.400444 | orchestrator | Sunday 13 April 2025 00:49:00 +0000 (0:00:00.332) 0:04:34.703 ********** 2025-04-13 00:57:27.400451 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400458 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400464 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400471 | orchestrator | 2025-04-13 00:57:27.400478 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-13 00:57:27.400505 | orchestrator | Sunday 13 April 2025 00:49:00 +0000 (0:00:00.358) 0:04:35.062 
********** 2025-04-13 00:57:27.400513 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400520 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400527 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400534 | orchestrator | 2025-04-13 00:57:27.400541 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-13 00:57:27.400548 | orchestrator | Sunday 13 April 2025 00:49:01 +0000 (0:00:00.613) 0:04:35.676 ********** 2025-04-13 00:57:27.400555 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400562 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400569 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400576 | orchestrator | 2025-04-13 00:57:27.400628 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-13 00:57:27.400638 | orchestrator | Sunday 13 April 2025 00:49:01 +0000 (0:00:00.382) 0:04:36.058 ********** 2025-04-13 00:57:27.400646 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400654 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400661 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400669 | orchestrator | 2025-04-13 00:57:27.400676 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-13 00:57:27.400683 | orchestrator | Sunday 13 April 2025 00:49:02 +0000 (0:00:00.375) 0:04:36.434 ********** 2025-04-13 00:57:27.400690 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400697 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400704 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400710 | orchestrator | 2025-04-13 00:57:27.400717 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-13 00:57:27.400724 | orchestrator | Sunday 13 April 2025 00:49:02 +0000 (0:00:00.342) 0:04:36.776 
********** 2025-04-13 00:57:27.400735 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-13 00:57:27.400743 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-13 00:57:27.400749 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-13 00:57:27.400756 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400762 | orchestrator | 2025-04-13 00:57:27.400770 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-13 00:57:27.400776 | orchestrator | Sunday 13 April 2025 00:49:03 +0000 (0:00:00.816) 0:04:37.593 ********** 2025-04-13 00:57:27.400784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-13 00:57:27.400790 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-13 00:57:27.400797 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-13 00:57:27.400804 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400810 | orchestrator | 2025-04-13 00:57:27.400817 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-13 00:57:27.400824 | orchestrator | Sunday 13 April 2025 00:49:04 +0000 (0:00:00.836) 0:04:38.430 ********** 2025-04-13 00:57:27.400831 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-13 00:57:27.400838 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-13 00:57:27.400845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-13 00:57:27.400852 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400859 | orchestrator | 2025-04-13 00:57:27.400866 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-13 00:57:27.400872 | orchestrator | Sunday 13 April 2025 00:49:04 +0000 (0:00:00.456) 0:04:38.886 ********** 2025-04-13 00:57:27.400879 | orchestrator | 
skipping: [testbed-node-0] 2025-04-13 00:57:27.400886 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400893 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400899 | orchestrator | 2025-04-13 00:57:27.400906 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-13 00:57:27.400916 | orchestrator | Sunday 13 April 2025 00:49:05 +0000 (0:00:00.460) 0:04:39.346 ********** 2025-04-13 00:57:27.400923 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-13 00:57:27.400930 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400937 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-13 00:57:27.400943 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400949 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-13 00:57:27.400955 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400961 | orchestrator | 2025-04-13 00:57:27.400967 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-13 00:57:27.400973 | orchestrator | Sunday 13 April 2025 00:49:05 +0000 (0:00:00.581) 0:04:39.927 ********** 2025-04-13 00:57:27.400979 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.400985 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.400991 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.400997 | orchestrator | 2025-04-13 00:57:27.401003 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-13 00:57:27.401009 | orchestrator | Sunday 13 April 2025 00:49:06 +0000 (0:00:00.413) 0:04:40.341 ********** 2025-04-13 00:57:27.401015 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.401021 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.401027 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.401033 | orchestrator | 2025-04-13 00:57:27.401039 | 
orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-13 00:57:27.401045 | orchestrator | Sunday 13 April 2025 00:49:06 +0000 (0:00:00.648) 0:04:40.990 ********** 2025-04-13 00:57:27.401051 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-13 00:57:27.401058 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.401064 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-13 00:57:27.401070 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.401081 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-13 00:57:27.401087 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.401093 | orchestrator | 2025-04-13 00:57:27.401099 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-13 00:57:27.401105 | orchestrator | Sunday 13 April 2025 00:49:07 +0000 (0:00:00.638) 0:04:41.629 ********** 2025-04-13 00:57:27.401111 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.401117 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.401123 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.401132 | orchestrator | 2025-04-13 00:57:27.401152 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-13 00:57:27.401159 | orchestrator | Sunday 13 April 2025 00:49:07 +0000 (0:00:00.531) 0:04:42.160 ********** 2025-04-13 00:57:27.401165 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-13 00:57:27.401171 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-13 00:57:27.401177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-13 00:57:27.401184 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-13 00:57:27.401205 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-13 00:57:27.401213 | orchestrator | 
skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-13 00:57:27.401219 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.401225 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.401231 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-13 00:57:27.401241 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-13 00:57:27.401247 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-13 00:57:27.401253 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.401259 | orchestrator | 2025-04-13 00:57:27.401265 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-13 00:57:27.401271 | orchestrator | Sunday 13 April 2025 00:49:08 +0000 (0:00:01.035) 0:04:43.196 ********** 2025-04-13 00:57:27.401277 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.401283 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.401289 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.401295 | orchestrator | 2025-04-13 00:57:27.401302 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-13 00:57:27.401308 | orchestrator | Sunday 13 April 2025 00:49:09 +0000 (0:00:00.607) 0:04:43.803 ********** 2025-04-13 00:57:27.401314 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.401320 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.401326 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.401332 | orchestrator | 2025-04-13 00:57:27.401338 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-13 00:57:27.401344 | orchestrator | Sunday 13 April 2025 00:49:10 +0000 (0:00:00.968) 0:04:44.772 ********** 2025-04-13 00:57:27.401350 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.401357 | orchestrator | skipping: [testbed-node-1] 2025-04-13 
2025-04-13 00:57:27.401363 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.401369 | orchestrator |
2025-04-13 00:57:27.401375 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-04-13 00:57:27.401381 | orchestrator | Sunday 13 April 2025 00:49:11 +0000 (0:00:00.787) 0:04:45.559 **********
2025-04-13 00:57:27.401387 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.401393 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.401400 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.401406 | orchestrator |
2025-04-13 00:57:27.401412 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] **********************************
2025-04-13 00:57:27.401418 | orchestrator | Sunday 13 April 2025 00:49:12 +0000 (0:00:00.921) 0:04:46.481 **********
2025-04-13 00:57:27.401424 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.401430 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.401440 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.401446 | orchestrator |
2025-04-13 00:57:27.401453 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] **********************************
2025-04-13 00:57:27.401459 | orchestrator | Sunday 13 April 2025 00:49:12 +0000 (0:00:00.348) 0:04:46.830 **********
2025-04-13 00:57:27.401465 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:57:27.401471 | orchestrator |
2025-04-13 00:57:27.401477 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] **************
2025-04-13 00:57:27.401483 | orchestrator | Sunday 13 April 2025 00:49:13 +0000 (0:00:00.909) 0:04:47.739 **********
2025-04-13 00:57:27.401489 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.401495 | orchestrator |
2025-04-13 00:57:27.401502 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] *****************************
2025-04-13 00:57:27.401508 | orchestrator | Sunday 13 April 2025 00:49:13 +0000 (0:00:00.208) 0:04:47.948 **********
2025-04-13 00:57:27.401514 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-04-13 00:57:27.401520 | orchestrator |
2025-04-13 00:57:27.401526 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] ****************************
2025-04-13 00:57:27.401532 | orchestrator | Sunday 13 April 2025 00:49:14 +0000 (0:00:00.862) 0:04:48.811 **********
2025-04-13 00:57:27.401538 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.401544 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.401550 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.401556 | orchestrator |
2025-04-13 00:57:27.401563 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] *******************
2025-04-13 00:57:27.401569 | orchestrator | Sunday 13 April 2025 00:49:14 +0000 (0:00:00.402) 0:04:49.214 **********
2025-04-13 00:57:27.401575 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.401581 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.401587 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.401593 | orchestrator |
2025-04-13 00:57:27.401599 | orchestrator | TASK [ceph-mon : create monitor initial keyring] *******************************
2025-04-13 00:57:27.401608 | orchestrator | Sunday 13 April 2025 00:49:15 +0000 (0:00:00.455) 0:04:49.669 **********
2025-04-13 00:57:27.401614 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.401621 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.401627 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.401633 | orchestrator |
2025-04-13 00:57:27.401639 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] ***********
2025-04-13 00:57:27.401645 | orchestrator | Sunday 13 April 2025 00:49:16 +0000 (0:00:01.293) 0:04:50.962 **********
2025-04-13 00:57:27.401651 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.401657 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.401663 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.401669 | orchestrator |
2025-04-13 00:57:27.401675 | orchestrator | TASK [ceph-mon : create monitor directory] *************************************
2025-04-13 00:57:27.401681 | orchestrator | Sunday 13 April 2025 00:49:17 +0000 (0:00:00.843) 0:04:51.805 **********
2025-04-13 00:57:27.401687 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.401694 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.401700 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.401706 | orchestrator |
2025-04-13 00:57:27.401712 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] ***************
2025-04-13 00:57:27.401718 | orchestrator | Sunday 13 April 2025 00:49:18 +0000 (0:00:00.713) 0:04:52.518 **********
2025-04-13 00:57:27.401724 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.401730 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.401736 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.401743 | orchestrator |
2025-04-13 00:57:27.401762 | orchestrator | TASK [ceph-mon : create custom admin keyring] **********************************
2025-04-13 00:57:27.401769 | orchestrator | Sunday 13 April 2025 00:49:18 +0000 (0:00:00.698) 0:04:53.217 **********
2025-04-13 00:57:27.401775 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.401781 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.401791 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.401797 | orchestrator |
2025-04-13 00:57:27.401803 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] *********************
2025-04-13 00:57:27.401810 | orchestrator | Sunday 13 April 2025 00:49:19 +0000 (0:00:00.600) 0:04:53.817 **********
2025-04-13 00:57:27.401816 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.401822 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.401828 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.401834 | orchestrator |
2025-04-13 00:57:27.401840 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************
2025-04-13 00:57:27.401846 | orchestrator | Sunday 13 April 2025 00:49:19 +0000 (0:00:00.384) 0:04:54.202 **********
2025-04-13 00:57:27.401852 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.401859 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.401865 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.401871 | orchestrator |
2025-04-13 00:57:27.401877 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] **************************
2025-04-13 00:57:27.401883 | orchestrator | Sunday 13 April 2025 00:49:20 +0000 (0:00:00.400) 0:04:54.603 **********
2025-04-13 00:57:27.401889 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.401895 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.401901 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.401907 | orchestrator |
2025-04-13 00:57:27.401914 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************
2025-04-13 00:57:27.401923 | orchestrator | Sunday 13 April 2025 00:49:20 +0000 (0:00:00.288) 0:04:54.892 **********
2025-04-13 00:57:27.401930 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.401936 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.401942 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.401951 | orchestrator |
2025-04-13 00:57:27.401958 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************
2025-04-13 00:57:27.401964 | orchestrator | Sunday 13 April 2025 00:49:22 +0000 (0:00:01.444) 0:04:56.336 **********
2025-04-13 00:57:27.401970 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.401977 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.401983 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.401989 | orchestrator |
2025-04-13 00:57:27.401995 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************
2025-04-13 00:57:27.402001 | orchestrator | Sunday 13 April 2025 00:49:22 +0000 (0:00:00.386) 0:04:56.722 **********
2025-04-13 00:57:27.402008 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:57:27.402036 | orchestrator |
2025-04-13 00:57:27.402043 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] *************
2025-04-13 00:57:27.402050 | orchestrator | Sunday 13 April 2025 00:49:23 +0000 (0:00:00.683) 0:04:57.405 **********
2025-04-13 00:57:27.402056 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.402062 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.402068 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.402074 | orchestrator |
2025-04-13 00:57:27.402081 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************
2025-04-13 00:57:27.402087 | orchestrator | Sunday 13 April 2025 00:49:23 +0000 (0:00:00.344) 0:04:57.750 **********
2025-04-13 00:57:27.402093 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.402099 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.402105 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.402111 | orchestrator |
2025-04-13 00:57:27.402117 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************
2025-04-13 00:57:27.402123 | orchestrator | Sunday 13 April 2025 00:49:23 +0000 (0:00:00.333) 0:04:58.083 **********
2025-04-13 00:57:27.402129 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:57:27.402136 | orchestrator |
2025-04-13 00:57:27.402159 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] *****************
2025-04-13 00:57:27.402170 | orchestrator | Sunday 13 April 2025 00:49:24 +0000 (0:00:00.805) 0:04:58.889 **********
2025-04-13 00:57:27.402176 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.402182 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.402188 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.402194 | orchestrator |
2025-04-13 00:57:27.402200 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************
2025-04-13 00:57:27.402206 | orchestrator | Sunday 13 April 2025 00:49:25 +0000 (0:00:01.218) 0:05:00.107 **********
2025-04-13 00:57:27.402212 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.402218 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.402224 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.402230 | orchestrator |
2025-04-13 00:57:27.402236 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] ***************************************
2025-04-13 00:57:27.402245 | orchestrator | Sunday 13 April 2025 00:49:27 +0000 (0:00:01.235) 0:05:01.343 **********
2025-04-13 00:57:27.402251 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.402257 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.402263 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.402269 | orchestrator |
2025-04-13 00:57:27.402275 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************
2025-04-13 00:57:27.402281 | orchestrator | Sunday 13 April 2025 00:49:29 +0000 (0:00:02.080) 0:05:03.424 **********
2025-04-13 00:57:27.402287 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.402293 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.402300 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.402306 | orchestrator |
2025-04-13 00:57:27.402312 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] **********************************
2025-04-13 00:57:27.402318 | orchestrator | Sunday 13 April 2025 00:49:31 +0000 (0:00:01.948) 0:05:05.372 **********
2025-04-13 00:57:27.402324 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:57:27.402330 | orchestrator |
2025-04-13 00:57:27.402353 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] *************
2025-04-13 00:57:27.402360 | orchestrator | Sunday 13 April 2025 00:49:31 +0000 (0:00:00.560) 0:05:05.933 **********
2025-04-13 00:57:27.402366 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left).
2025-04-13 00:57:27.402372 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.402378 | orchestrator |
2025-04-13 00:57:27.402384 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] **************************************
2025-04-13 00:57:27.402390 | orchestrator | Sunday 13 April 2025 00:49:53 +0000 (0:00:21.491) 0:05:27.424 **********
2025-04-13 00:57:27.402397 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.402403 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.402409 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.402415 | orchestrator |
2025-04-13 00:57:27.402421 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] ***********************************
2025-04-13 00:57:27.402427 | orchestrator | Sunday 13 April 2025 00:50:00 +0000 (0:00:07.496) 0:05:34.921 **********
2025-04-13 00:57:27.402433 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.402439 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.402445 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.402451 | orchestrator |
2025-04-13 00:57:27.402458 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-04-13 00:57:27.402464 | orchestrator | Sunday 13 April 2025 00:50:01 +0000 (0:00:01.253) 0:05:36.174 **********
2025-04-13 00:57:27.402470 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.402476 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.402482 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.402488 | orchestrator |
2025-04-13 00:57:27.402494 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] **********************************
2025-04-13 00:57:27.402500 | orchestrator | Sunday 13 April 2025 00:50:02 +0000 (0:00:00.678) 0:05:36.853 **********
2025-04-13 00:57:27.402510 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:57:27.402517 | orchestrator |
2025-04-13 00:57:27.402523 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ********
2025-04-13 00:57:27.402529 | orchestrator | Sunday 13 April 2025 00:50:03 +0000 (0:00:00.820) 0:05:37.674 **********
2025-04-13 00:57:27.402535 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.402541 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.402547 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.402553 | orchestrator |
2025-04-13 00:57:27.402559 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-04-13 00:57:27.402565 | orchestrator | Sunday 13 April 2025 00:50:03 +0000 (0:00:00.374) 0:05:38.048 **********
2025-04-13 00:57:27.402571 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.402577 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.402583 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.402589 | orchestrator |
2025-04-13 00:57:27.402596 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ********************
2025-04-13 00:57:27.402602 | orchestrator | Sunday 13 April 2025 00:50:04 +0000 (0:00:01.174) 0:05:39.223 **********
2025-04-13 00:57:27.402608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 00:57:27.402614 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-13 00:57:27.402620 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-13 00:57:27.402626 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.402632 | orchestrator |
2025-04-13 00:57:27.402638 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] *********
2025-04-13 00:57:27.402644 | orchestrator | Sunday 13 April 2025 00:50:06 +0000 (0:00:01.322) 0:05:40.546 **********
2025-04-13 00:57:27.402650 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.402656 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.402662 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.402669 | orchestrator |
2025-04-13 00:57:27.402675 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-13 00:57:27.402681 | orchestrator | Sunday 13 April 2025 00:50:06 +0000 (0:00:00.386) 0:05:40.932 **********
2025-04-13 00:57:27.402687 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.402693 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.402699 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.402705 | orchestrator |
2025-04-13 00:57:27.402711 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-04-13 00:57:27.402717 | orchestrator |
2025-04-13 00:57:27.402723 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-04-13 00:57:27.402729 | orchestrator | Sunday 13 April 2025 00:50:08 +0000 (0:00:02.196) 0:05:43.129 **********
2025-04-13 00:57:27.402735 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:57:27.402741 | orchestrator |
2025-04-13 00:57:27.402747 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-04-13 00:57:27.402753 | orchestrator | Sunday 13 April 2025 00:50:09 +0000 (0:00:00.799) 0:05:43.928 **********
2025-04-13 00:57:27.402759 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.402766 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.402772 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.402778 | orchestrator |
2025-04-13 00:57:27.402784 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-04-13 00:57:27.402790 | orchestrator | Sunday 13 April 2025 00:50:10 +0000 (0:00:00.764) 0:05:44.693 **********
2025-04-13 00:57:27.402796 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.402802 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.402808 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.402817 | orchestrator |
2025-04-13 00:57:27.402828 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-04-13 00:57:27.402838 | orchestrator | Sunday 13 April 2025 00:50:10 +0000 (0:00:00.345) 0:05:45.038 **********
2025-04-13 00:57:27.402844 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.402850 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.402856 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.402862 | orchestrator |
2025-04-13 00:57:27.402882 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-04-13 00:57:27.402889 | orchestrator | Sunday 13 April 2025 00:50:11 +0000 (0:00:00.603) 0:05:45.642 **********
2025-04-13 00:57:27.402896 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.402902 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.402908 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.402914 | orchestrator |
2025-04-13 00:57:27.402920 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-04-13 00:57:27.402926 | orchestrator | Sunday 13 April 2025 00:50:11 +0000 (0:00:00.350) 0:05:45.992 **********
2025-04-13 00:57:27.402932 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.402938 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.402944 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.402950 | orchestrator |
2025-04-13 00:57:27.402957 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-04-13 00:57:27.402963 | orchestrator | Sunday 13 April 2025 00:50:12 +0000 (0:00:00.732) 0:05:46.725 **********
2025-04-13 00:57:27.402969 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.402975 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.402981 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.402987 | orchestrator |
2025-04-13 00:57:27.402993 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-04-13 00:57:27.402999 | orchestrator | Sunday 13 April 2025 00:50:12 +0000 (0:00:00.363) 0:05:47.089 **********
2025-04-13 00:57:27.403005 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403011 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403017 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403023 | orchestrator |
2025-04-13 00:57:27.403029 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-04-13 00:57:27.403035 | orchestrator | Sunday 13 April 2025 00:50:13 +0000 (0:00:00.623) 0:05:47.712 **********
2025-04-13 00:57:27.403041 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403047 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403053 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403059 | orchestrator |
2025-04-13 00:57:27.403065 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-04-13 00:57:27.403071 | orchestrator | Sunday 13 April 2025 00:50:13 +0000 (0:00:00.328) 0:05:48.040 **********
2025-04-13 00:57:27.403078 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403084 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403090 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403096 | orchestrator |
2025-04-13 00:57:27.403102 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-04-13 00:57:27.403108 | orchestrator | Sunday 13 April 2025 00:50:14 +0000 (0:00:00.316) 0:05:48.356 **********
2025-04-13 00:57:27.403114 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403120 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403126 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403132 | orchestrator |
2025-04-13 00:57:27.403151 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-04-13 00:57:27.403162 | orchestrator | Sunday 13 April 2025 00:50:14 +0000 (0:00:00.367) 0:05:48.724 **********
2025-04-13 00:57:27.403171 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.403181 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.403190 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.403196 | orchestrator |
2025-04-13 00:57:27.403202 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-04-13 00:57:27.403208 | orchestrator | Sunday 13 April 2025 00:50:15 +0000 (0:00:01.044) 0:05:49.769 **********
2025-04-13 00:57:27.403218 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403224 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403231 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403237 | orchestrator |
2025-04-13 00:57:27.403243 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-04-13 00:57:27.403249 | orchestrator | Sunday 13 April 2025 00:50:15 +0000 (0:00:00.329) 0:05:50.098 **********
2025-04-13 00:57:27.403255 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.403261 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.403267 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.403273 | orchestrator |
2025-04-13 00:57:27.403279 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-04-13 00:57:27.403285 | orchestrator | Sunday 13 April 2025 00:50:16 +0000 (0:00:00.410) 0:05:50.509 **********
2025-04-13 00:57:27.403292 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403298 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403304 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403310 | orchestrator |
2025-04-13 00:57:27.403316 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-04-13 00:57:27.403322 | orchestrator | Sunday 13 April 2025 00:50:16 +0000 (0:00:00.325) 0:05:50.834 **********
2025-04-13 00:57:27.403328 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403334 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403340 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403346 | orchestrator |
2025-04-13 00:57:27.403352 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-04-13 00:57:27.403358 | orchestrator | Sunday 13 April 2025 00:50:17 +0000 (0:00:00.665) 0:05:51.499 **********
2025-04-13 00:57:27.403364 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403370 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403376 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403382 | orchestrator |
2025-04-13 00:57:27.403389 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-04-13 00:57:27.403395 | orchestrator | Sunday 13 April 2025 00:50:17 +0000 (0:00:00.394) 0:05:51.893 **********
2025-04-13 00:57:27.403401 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403407 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403413 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403419 | orchestrator |
2025-04-13 00:57:27.403425 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-04-13 00:57:27.403434 | orchestrator | Sunday 13 April 2025 00:50:17 +0000 (0:00:00.357) 0:05:52.251 **********
2025-04-13 00:57:27.403440 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403446 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403452 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403458 | orchestrator |
2025-04-13 00:57:27.403480 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-04-13 00:57:27.403487 | orchestrator | Sunday 13 April 2025 00:50:18 +0000 (0:00:00.641) 0:05:52.893 **********
2025-04-13 00:57:27.403493 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.403499 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.403506 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.403515 | orchestrator |
2025-04-13 00:57:27.403521 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-04-13 00:57:27.403527 | orchestrator | Sunday 13 April 2025 00:50:18 +0000 (0:00:00.396) 0:05:53.290 **********
2025-04-13 00:57:27.403533 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.403539 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.403545 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.403551 | orchestrator |
2025-04-13 00:57:27.403557 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-04-13 00:57:27.403564 | orchestrator | Sunday 13 April 2025 00:50:19 +0000 (0:00:00.415) 0:05:53.705 **********
2025-04-13 00:57:27.403570 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403579 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403585 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403591 | orchestrator |
2025-04-13 00:57:27.403597 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-04-13 00:57:27.403603 | orchestrator | Sunday 13 April 2025 00:50:19 +0000 (0:00:00.389) 0:05:54.094 **********
2025-04-13 00:57:27.403609 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403615 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403622 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403628 | orchestrator |
2025-04-13 00:57:27.403634 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-04-13 00:57:27.403640 | orchestrator | Sunday 13 April 2025 00:50:20 +0000 (0:00:00.626) 0:05:54.721 **********
2025-04-13 00:57:27.403646 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403652 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403658 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403664 | orchestrator |
2025-04-13 00:57:27.403670 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-04-13 00:57:27.403676 | orchestrator | Sunday 13 April 2025 00:50:20 +0000 (0:00:00.354) 0:05:55.076 **********
2025-04-13 00:57:27.403682 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403688 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403694 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403700 | orchestrator |
2025-04-13 00:57:27.403707 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-04-13 00:57:27.403713 | orchestrator | Sunday 13 April 2025 00:50:21 +0000 (0:00:00.333) 0:05:55.409 **********
2025-04-13 00:57:27.403718 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403725 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403731 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403737 | orchestrator |
2025-04-13 00:57:27.403743 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-04-13 00:57:27.403749 | orchestrator | Sunday 13 April 2025 00:50:21 +0000 (0:00:00.352) 0:05:55.761 **********
2025-04-13 00:57:27.403755 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403761 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403767 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403773 | orchestrator |
2025-04-13 00:57:27.403779 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-04-13 00:57:27.403785 | orchestrator | Sunday 13 April 2025 00:50:22 +0000 (0:00:00.616) 0:05:56.378 **********
2025-04-13 00:57:27.403791 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403797 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403803 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403809 | orchestrator |
2025-04-13 00:57:27.403815 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-04-13 00:57:27.403822 | orchestrator | Sunday 13 April 2025 00:50:22 +0000 (0:00:00.486) 0:05:56.864 **********
2025-04-13 00:57:27.403828 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403834 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403840 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403846 | orchestrator |
2025-04-13 00:57:27.403852 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-04-13 00:57:27.403858 | orchestrator | Sunday 13 April 2025 00:50:22 +0000 (0:00:00.339) 0:05:57.204 **********
2025-04-13 00:57:27.403864 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403871 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403877 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403883 | orchestrator |
2025-04-13 00:57:27.403889 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-04-13 00:57:27.403895 | orchestrator | Sunday 13 April 2025 00:50:23 +0000 (0:00:00.339) 0:05:57.543 **********
2025-04-13 00:57:27.403905 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403911 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403917 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403923 | orchestrator |
2025-04-13 00:57:27.403930 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-04-13 00:57:27.403936 | orchestrator | Sunday 13 April 2025 00:50:23 +0000 (0:00:00.597) 0:05:58.140 **********
2025-04-13 00:57:27.403942 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403948 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403954 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403960 | orchestrator |
2025-04-13 00:57:27.403966 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-04-13 00:57:27.403972 | orchestrator | Sunday 13 April 2025 00:50:24 +0000 (0:00:00.368) 0:05:58.509 **********
2025-04-13 00:57:27.403978 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.403984 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.403990 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.403996 | orchestrator |
2025-04-13 00:57:27.404002 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-04-13 00:57:27.404011 | orchestrator | Sunday 13 April 2025 00:50:24 +0000 (0:00:00.362) 0:05:58.872 **********
2025-04-13 00:57:27.404031 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-13 00:57:27.404038 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-13 00:57:27.404044 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.404050 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-13 00:57:27.404056 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-13 00:57:27.404062 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.404069 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-13 00:57:27.404075 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-13 00:57:27.404081 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.404087 | orchestrator |
2025-04-13 00:57:27.404093 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-04-13 00:57:27.404099 | orchestrator | Sunday 13 April 2025 00:50:24 +0000 (0:00:00.358) 0:05:59.231 **********
2025-04-13 00:57:27.404105 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-04-13 00:57:27.404111 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-04-13 00:57:27.404117 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.404123 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-04-13 00:57:27.404129 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-04-13 00:57:27.404135 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.404177 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-04-13 00:57:27.404183 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-04-13 00:57:27.404189 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.404196 | orchestrator |
2025-04-13 00:57:27.404202 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-04-13 00:57:27.404208 | orchestrator | Sunday 13 April 2025 00:50:25 +0000 (0:00:00.647) 0:05:59.878 **********
2025-04-13 00:57:27.404214 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.404220 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.404226 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.404232 | orchestrator |
2025-04-13 00:57:27.404238 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-04-13 00:57:27.404248 | orchestrator | Sunday 13 April 2025 00:50:25 +0000 (0:00:00.352) 0:06:00.231 **********
2025-04-13 00:57:27.404254 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.404260 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.404266 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.404275 | orchestrator |
2025-04-13 00:57:27.404281 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-13 00:57:27.404294 | orchestrator | Sunday 13 April 2025 00:50:26 +0000 (0:00:00.378) 0:06:00.609 **********
2025-04-13 00:57:27.404300 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.404306 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.404312 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.404318 | orchestrator |
2025-04-13 00:57:27.404324 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-13 00:57:27.404330 | orchestrator | Sunday 13 April 2025 00:50:26 +0000 (0:00:00.364) 0:06:00.974 **********
2025-04-13 00:57:27.404336 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.404343 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.404349 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.404355 | orchestrator |
2025-04-13 00:57:27.404361 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-13 00:57:27.404367 | orchestrator | Sunday 13 April 2025 00:50:27 +0000 (0:00:00.627) 0:06:01.602 **********
2025-04-13 00:57:27.404373 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.404379 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.404385 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.404391 | orchestrator |
2025-04-13 00:57:27.404397 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-13 00:57:27.404403 | orchestrator | Sunday 13 April 2025 00:50:27 +0000 (0:00:00.353) 0:06:01.955 **********
2025-04-13 00:57:27.404410 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.404416 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.404422 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.404428 | orchestrator |
2025-04-13 00:57:27.404434 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-13 00:57:27.404440 | orchestrator | Sunday 13 April 2025 00:50:28 +0000 (0:00:00.355) 0:06:02.310 **********
2025-04-13 00:57:27.404446 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-13 00:57:27.404452 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-13 00:57:27.404458 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-13 00:57:27.404464 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.404470 | orchestrator |
2025-04-13 00:57:27.404476 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-13 00:57:27.404483 | orchestrator | Sunday 13 April 2025 00:50:28 +0000 (0:00:00.429) 0:06:02.740 **********
2025-04-13 00:57:27.404489 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-13 00:57:27.404495 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-13 00:57:27.404501 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-13 00:57:27.404507 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.404513 | orchestrator |
2025-04-13 00:57:27.404519 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-13 00:57:27.404526 | orchestrator | Sunday 13 April 2025 00:50:28 +0000 (0:00:00.448) 0:06:03.188 **********
2025-04-13 00:57:27.404532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-13 00:57:27.404538 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-13 00:57:27.404544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-13 00:57:27.404550 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.404556 | orchestrator |
2025-04-13 00:57:27.404562 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-13 00:57:27.404587 | orchestrator | Sunday 13 April 2025 00:50:29 +0000 (0:00:00.764) 0:06:03.952 **********
2025-04-13 00:57:27.404594 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.404600 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.404606 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.404612 | orchestrator |
2025-04-13 00:57:27.404618 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-13 00:57:27.404629 | orchestrator | Sunday 13 April 2025 00:50:30 +0000 (0:00:00.677) 0:06:04.630
********** 2025-04-13 00:57:27.404635 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-13 00:57:27.404642 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.404648 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-13 00:57:27.404654 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.404660 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-13 00:57:27.404667 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.404673 | orchestrator | 2025-04-13 00:57:27.404679 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-13 00:57:27.404685 | orchestrator | Sunday 13 April 2025 00:50:30 +0000 (0:00:00.487) 0:06:05.117 ********** 2025-04-13 00:57:27.404691 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.404697 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.404702 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.404708 | orchestrator | 2025-04-13 00:57:27.404714 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-13 00:57:27.404720 | orchestrator | Sunday 13 April 2025 00:50:31 +0000 (0:00:00.427) 0:06:05.545 ********** 2025-04-13 00:57:27.404726 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.404732 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.404738 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.404743 | orchestrator | 2025-04-13 00:57:27.404749 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-13 00:57:27.404755 | orchestrator | Sunday 13 April 2025 00:50:31 +0000 (0:00:00.539) 0:06:06.084 ********** 2025-04-13 00:57:27.404761 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-13 00:57:27.404766 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.404772 | orchestrator | skipping: [testbed-node-1] => (item=0)  
2025-04-13 00:57:27.404778 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.404784 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-13 00:57:27.404790 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.404795 | orchestrator | 2025-04-13 00:57:27.404801 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-13 00:57:27.404807 | orchestrator | Sunday 13 April 2025 00:50:32 +0000 (0:00:00.456) 0:06:06.541 ********** 2025-04-13 00:57:27.404813 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.404819 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.404824 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.404830 | orchestrator | 2025-04-13 00:57:27.404836 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-13 00:57:27.404842 | orchestrator | Sunday 13 April 2025 00:50:32 +0000 (0:00:00.311) 0:06:06.853 ********** 2025-04-13 00:57:27.404848 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-13 00:57:27.404853 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-13 00:57:27.404859 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-13 00:57:27.404865 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-13 00:57:27.404871 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-13 00:57:27.404877 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.404883 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-13 00:57:27.404888 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.404894 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-13 00:57:27.404900 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-13 00:57:27.404906 | orchestrator | skipping: [testbed-node-2] 
=> (item=testbed-node-5)  2025-04-13 00:57:27.404911 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.404917 | orchestrator | 2025-04-13 00:57:27.404923 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-13 00:57:27.404929 | orchestrator | Sunday 13 April 2025 00:50:33 +0000 (0:00:00.777) 0:06:07.631 ********** 2025-04-13 00:57:27.404938 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.404944 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.404950 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.404955 | orchestrator | 2025-04-13 00:57:27.404961 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-13 00:57:27.404967 | orchestrator | Sunday 13 April 2025 00:50:33 +0000 (0:00:00.547) 0:06:08.178 ********** 2025-04-13 00:57:27.404973 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.404978 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.404984 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.404990 | orchestrator | 2025-04-13 00:57:27.404998 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-13 00:57:27.405004 | orchestrator | Sunday 13 April 2025 00:50:34 +0000 (0:00:00.685) 0:06:08.863 ********** 2025-04-13 00:57:27.405010 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.405016 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.405022 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.405027 | orchestrator | 2025-04-13 00:57:27.405033 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-13 00:57:27.405039 | orchestrator | Sunday 13 April 2025 00:50:35 +0000 (0:00:00.566) 0:06:09.430 ********** 2025-04-13 00:57:27.405045 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.405051 | orchestrator | 
skipping: [testbed-node-1] 2025-04-13 00:57:27.405057 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.405062 | orchestrator | 2025-04-13 00:57:27.405068 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-04-13 00:57:27.405074 | orchestrator | Sunday 13 April 2025 00:50:35 +0000 (0:00:00.851) 0:06:10.281 ********** 2025-04-13 00:57:27.405080 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-13 00:57:27.405099 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-13 00:57:27.405106 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-13 00:57:27.405112 | orchestrator | 2025-04-13 00:57:27.405117 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-04-13 00:57:27.405123 | orchestrator | Sunday 13 April 2025 00:50:36 +0000 (0:00:00.750) 0:06:11.032 ********** 2025-04-13 00:57:27.405129 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:57:27.405135 | orchestrator | 2025-04-13 00:57:27.405181 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-04-13 00:57:27.405188 | orchestrator | Sunday 13 April 2025 00:50:37 +0000 (0:00:00.569) 0:06:11.601 ********** 2025-04-13 00:57:27.405193 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:27.405199 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:27.405205 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:27.405211 | orchestrator | 2025-04-13 00:57:27.405217 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-04-13 00:57:27.405222 | orchestrator | Sunday 13 April 2025 00:50:37 +0000 (0:00:00.695) 0:06:12.296 ********** 2025-04-13 00:57:27.405228 | orchestrator | skipping: 
[testbed-node-0] 2025-04-13 00:57:27.405240 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.405246 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.405252 | orchestrator | 2025-04-13 00:57:27.405258 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-04-13 00:57:27.405264 | orchestrator | Sunday 13 April 2025 00:50:38 +0000 (0:00:00.613) 0:06:12.910 ********** 2025-04-13 00:57:27.405270 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-13 00:57:27.405276 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-13 00:57:27.405282 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-13 00:57:27.405287 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-04-13 00:57:27.405297 | orchestrator | 2025-04-13 00:57:27.405303 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-04-13 00:57:27.405309 | orchestrator | Sunday 13 April 2025 00:50:46 +0000 (0:00:07.921) 0:06:20.831 ********** 2025-04-13 00:57:27.405315 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.405321 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.405327 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.405332 | orchestrator | 2025-04-13 00:57:27.405341 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-04-13 00:57:27.405347 | orchestrator | Sunday 13 April 2025 00:50:47 +0000 (0:00:00.621) 0:06:21.452 ********** 2025-04-13 00:57:27.405353 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-13 00:57:27.405359 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-13 00:57:27.405364 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-13 00:57:27.405370 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-04-13 00:57:27.405376 | orchestrator | ok: [testbed-node-2 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-04-13 00:57:27.405382 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-13 00:57:27.405388 | orchestrator | 2025-04-13 00:57:27.405394 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-04-13 00:57:27.405399 | orchestrator | Sunday 13 April 2025 00:50:49 +0000 (0:00:01.896) 0:06:23.349 ********** 2025-04-13 00:57:27.405405 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-13 00:57:27.405411 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-13 00:57:27.405417 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-13 00:57:27.405423 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-13 00:57:27.405428 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-04-13 00:57:27.405434 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-04-13 00:57:27.405440 | orchestrator | 2025-04-13 00:57:27.405446 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-04-13 00:57:27.405451 | orchestrator | Sunday 13 April 2025 00:50:50 +0000 (0:00:01.302) 0:06:24.652 ********** 2025-04-13 00:57:27.405457 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.405463 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.405469 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.405475 | orchestrator | 2025-04-13 00:57:27.405481 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-04-13 00:57:27.405486 | orchestrator | Sunday 13 April 2025 00:50:51 +0000 (0:00:00.930) 0:06:25.582 ********** 2025-04-13 00:57:27.405492 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.405498 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.405504 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.405510 | orchestrator | 2025-04-13 
00:57:27.405515 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-04-13 00:57:27.405521 | orchestrator | Sunday 13 April 2025 00:50:51 +0000 (0:00:00.376) 0:06:25.958 ********** 2025-04-13 00:57:27.405527 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.405533 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.405539 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.405544 | orchestrator | 2025-04-13 00:57:27.405550 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-04-13 00:57:27.405556 | orchestrator | Sunday 13 April 2025 00:50:51 +0000 (0:00:00.332) 0:06:26.291 ********** 2025-04-13 00:57:27.405562 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:57:27.405568 | orchestrator | 2025-04-13 00:57:27.405576 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-04-13 00:57:27.405582 | orchestrator | Sunday 13 April 2025 00:50:52 +0000 (0:00:00.858) 0:06:27.150 ********** 2025-04-13 00:57:27.405588 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.405594 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.405603 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.405609 | orchestrator | 2025-04-13 00:57:27.405615 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-04-13 00:57:27.405636 | orchestrator | Sunday 13 April 2025 00:50:53 +0000 (0:00:00.389) 0:06:27.539 ********** 2025-04-13 00:57:27.405642 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.405648 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.405654 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.405660 | orchestrator | 2025-04-13 00:57:27.405666 | orchestrator | TASK [ceph-mgr : include_tasks 
systemd.yml] ************************************ 2025-04-13 00:57:27.405672 | orchestrator | Sunday 13 April 2025 00:50:53 +0000 (0:00:00.346) 0:06:27.886 ********** 2025-04-13 00:57:27.405678 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:57:27.405683 | orchestrator | 2025-04-13 00:57:27.405689 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-04-13 00:57:27.405695 | orchestrator | Sunday 13 April 2025 00:50:54 +0000 (0:00:00.791) 0:06:28.677 ********** 2025-04-13 00:57:27.405701 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:27.405707 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:27.405712 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:27.405718 | orchestrator | 2025-04-13 00:57:27.405724 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-04-13 00:57:27.405730 | orchestrator | Sunday 13 April 2025 00:50:55 +0000 (0:00:01.287) 0:06:29.964 ********** 2025-04-13 00:57:27.405735 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:27.405741 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:27.405747 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:27.405753 | orchestrator | 2025-04-13 00:57:27.405759 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-04-13 00:57:27.405764 | orchestrator | Sunday 13 April 2025 00:50:56 +0000 (0:00:01.190) 0:06:31.154 ********** 2025-04-13 00:57:27.405770 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:27.405776 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:27.405782 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:27.405787 | orchestrator | 2025-04-13 00:57:27.405793 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-04-13 00:57:27.405799 | 
orchestrator | Sunday 13 April 2025 00:50:58 +0000 (0:00:01.958) 0:06:33.113 ********** 2025-04-13 00:57:27.405805 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:27.405811 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:27.405816 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:27.405822 | orchestrator | 2025-04-13 00:57:27.405828 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-04-13 00:57:27.405834 | orchestrator | Sunday 13 April 2025 00:51:00 +0000 (0:00:01.895) 0:06:35.009 ********** 2025-04-13 00:57:27.405840 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.405845 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.405851 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-04-13 00:57:27.405857 | orchestrator | 2025-04-13 00:57:27.405863 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-04-13 00:57:27.405868 | orchestrator | Sunday 13 April 2025 00:51:01 +0000 (0:00:00.626) 0:06:35.636 ********** 2025-04-13 00:57:27.405874 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-04-13 00:57:27.405880 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 
2025-04-13 00:57:27.405886 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-04-13 00:57:27.405892 | orchestrator | 2025-04-13 00:57:27.405898 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-04-13 00:57:27.405903 | orchestrator | Sunday 13 April 2025 00:51:14 +0000 (0:00:13.511) 0:06:49.147 ********** 2025-04-13 00:57:27.405909 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-04-13 00:57:27.405920 | orchestrator | 2025-04-13 00:57:27.405925 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-04-13 00:57:27.405931 | orchestrator | Sunday 13 April 2025 00:51:16 +0000 (0:00:01.680) 0:06:50.828 ********** 2025-04-13 00:57:27.405937 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.405943 | orchestrator | 2025-04-13 00:57:27.405948 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-04-13 00:57:27.405954 | orchestrator | Sunday 13 April 2025 00:51:16 +0000 (0:00:00.438) 0:06:51.266 ********** 2025-04-13 00:57:27.405960 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.405966 | orchestrator | 2025-04-13 00:57:27.405971 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-04-13 00:57:27.405977 | orchestrator | Sunday 13 April 2025 00:51:17 +0000 (0:00:00.336) 0:06:51.603 ********** 2025-04-13 00:57:27.405983 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-04-13 00:57:27.405989 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-04-13 00:57:27.405994 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-04-13 00:57:27.406000 | orchestrator | 2025-04-13 00:57:27.406006 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] 
************************************** 2025-04-13 00:57:27.406030 | orchestrator | Sunday 13 April 2025 00:51:24 +0000 (0:00:06.797) 0:06:58.400 ********** 2025-04-13 00:57:27.406037 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-04-13 00:57:27.406043 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-04-13 00:57:27.406049 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-04-13 00:57:27.406055 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-04-13 00:57:27.406061 | orchestrator | 2025-04-13 00:57:27.406067 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-13 00:57:27.406072 | orchestrator | Sunday 13 April 2025 00:51:28 +0000 (0:00:04.885) 0:07:03.286 ********** 2025-04-13 00:57:27.406078 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:27.406084 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:27.406090 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:27.406096 | orchestrator | 2025-04-13 00:57:27.406118 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-04-13 00:57:27.406125 | orchestrator | Sunday 13 April 2025 00:51:29 +0000 (0:00:00.726) 0:07:04.013 ********** 2025-04-13 00:57:27.406131 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:57:27.406149 | orchestrator | 2025-04-13 00:57:27.406156 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-04-13 00:57:27.406162 | orchestrator | Sunday 13 April 2025 00:51:30 +0000 (0:00:00.810) 0:07:04.823 ********** 2025-04-13 00:57:27.406168 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.406174 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.406179 | orchestrator | ok: 
[testbed-node-2] 2025-04-13 00:57:27.406185 | orchestrator | 2025-04-13 00:57:27.406191 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-04-13 00:57:27.406197 | orchestrator | Sunday 13 April 2025 00:51:30 +0000 (0:00:00.344) 0:07:05.167 ********** 2025-04-13 00:57:27.406202 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:27.406208 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:27.406214 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:27.406220 | orchestrator | 2025-04-13 00:57:27.406226 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-04-13 00:57:27.406232 | orchestrator | Sunday 13 April 2025 00:51:32 +0000 (0:00:01.468) 0:07:06.635 ********** 2025-04-13 00:57:27.406237 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-13 00:57:27.406243 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-13 00:57:27.406253 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-13 00:57:27.406259 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.406264 | orchestrator | 2025-04-13 00:57:27.406270 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-04-13 00:57:27.406276 | orchestrator | Sunday 13 April 2025 00:51:33 +0000 (0:00:00.700) 0:07:07.336 ********** 2025-04-13 00:57:27.406282 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:27.406288 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:27.406293 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:27.406302 | orchestrator | 2025-04-13 00:57:27.406308 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-13 00:57:27.406314 | orchestrator | Sunday 13 April 2025 00:51:33 +0000 (0:00:00.369) 0:07:07.706 ********** 2025-04-13 00:57:27.406319 | orchestrator | changed: [testbed-node-0] 
2025-04-13 00:57:27.406325 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:27.406331 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:27.406337 | orchestrator | 2025-04-13 00:57:27.406343 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-04-13 00:57:27.406348 | orchestrator | 2025-04-13 00:57:27.406354 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-13 00:57:27.406360 | orchestrator | Sunday 13 April 2025 00:51:35 +0000 (0:00:02.132) 0:07:09.838 ********** 2025-04-13 00:57:27.406366 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:57:27.406372 | orchestrator | 2025-04-13 00:57:27.406377 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-13 00:57:27.406383 | orchestrator | Sunday 13 April 2025 00:51:36 +0000 (0:00:00.868) 0:07:10.706 ********** 2025-04-13 00:57:27.406389 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.406395 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.406401 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.406406 | orchestrator | 2025-04-13 00:57:27.406412 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-13 00:57:27.406418 | orchestrator | Sunday 13 April 2025 00:51:36 +0000 (0:00:00.312) 0:07:11.019 ********** 2025-04-13 00:57:27.406424 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.406429 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.406435 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.406441 | orchestrator | 2025-04-13 00:57:27.406447 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-13 00:57:27.406452 | orchestrator | Sunday 13 April 2025 00:51:37 +0000 
(0:00:00.649) 0:07:11.669 ********** 2025-04-13 00:57:27.406458 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.406464 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.406470 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.406475 | orchestrator | 2025-04-13 00:57:27.406481 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-13 00:57:27.406487 | orchestrator | Sunday 13 April 2025 00:51:38 +0000 (0:00:01.026) 0:07:12.695 ********** 2025-04-13 00:57:27.406493 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.406498 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.406504 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.406510 | orchestrator | 2025-04-13 00:57:27.406516 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-13 00:57:27.406522 | orchestrator | Sunday 13 April 2025 00:51:39 +0000 (0:00:00.727) 0:07:13.423 ********** 2025-04-13 00:57:27.406527 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.406533 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.406539 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.406545 | orchestrator | 2025-04-13 00:57:27.406551 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-13 00:57:27.406556 | orchestrator | Sunday 13 April 2025 00:51:39 +0000 (0:00:00.326) 0:07:13.750 ********** 2025-04-13 00:57:27.406562 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.406571 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.406577 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.406583 | orchestrator | 2025-04-13 00:57:27.406592 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-13 00:57:27.406597 | orchestrator | Sunday 13 April 2025 00:51:40 +0000 (0:00:00.565) 0:07:14.315 ********** 
2025-04-13 00:57:27.406603 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.406609 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.406615 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.406621 | orchestrator | 2025-04-13 00:57:27.406626 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-13 00:57:27.406647 | orchestrator | Sunday 13 April 2025 00:51:40 +0000 (0:00:00.356) 0:07:14.671 ********** 2025-04-13 00:57:27.406654 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.406660 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.406666 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.406672 | orchestrator | 2025-04-13 00:57:27.406677 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-13 00:57:27.406683 | orchestrator | Sunday 13 April 2025 00:51:40 +0000 (0:00:00.348) 0:07:15.020 ********** 2025-04-13 00:57:27.406689 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.406695 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.406701 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.406706 | orchestrator | 2025-04-13 00:57:27.406712 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-13 00:57:27.406718 | orchestrator | Sunday 13 April 2025 00:51:41 +0000 (0:00:00.337) 0:07:15.357 ********** 2025-04-13 00:57:27.406724 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.406729 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.406735 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.406741 | orchestrator | 2025-04-13 00:57:27.406747 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-13 00:57:27.406752 | orchestrator | Sunday 13 April 2025 00:51:41 +0000 (0:00:00.610) 0:07:15.968 ********** 
2025-04-13 00:57:27.406758 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.406764 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.406770 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.406776 | orchestrator | 2025-04-13 00:57:27.406782 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-13 00:57:27.406787 | orchestrator | Sunday 13 April 2025 00:51:42 +0000 (0:00:00.727) 0:07:16.695 ********** 2025-04-13 00:57:27.406793 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.406799 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.406805 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.406811 | orchestrator | 2025-04-13 00:57:27.406816 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-13 00:57:27.406822 | orchestrator | Sunday 13 April 2025 00:51:42 +0000 (0:00:00.331) 0:07:17.027 ********** 2025-04-13 00:57:27.406828 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.406834 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.406839 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.406845 | orchestrator | 2025-04-13 00:57:27.406851 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-13 00:57:27.406857 | orchestrator | Sunday 13 April 2025 00:51:43 +0000 (0:00:00.338) 0:07:17.365 ********** 2025-04-13 00:57:27.406863 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.406869 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.406874 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.406880 | orchestrator | 2025-04-13 00:57:27.406886 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-13 00:57:27.406892 | orchestrator | Sunday 13 April 2025 00:51:43 +0000 (0:00:00.619) 0:07:17.985 ********** 2025-04-13 00:57:27.406898 | 
orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.406903 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.406913 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.406918 | orchestrator | 2025-04-13 00:57:27.406924 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-13 00:57:27.406930 | orchestrator | Sunday 13 April 2025 00:51:44 +0000 (0:00:00.338) 0:07:18.324 ********** 2025-04-13 00:57:27.406936 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.406942 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.406948 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.406956 | orchestrator | 2025-04-13 00:57:27.406962 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-13 00:57:27.406968 | orchestrator | Sunday 13 April 2025 00:51:44 +0000 (0:00:00.344) 0:07:18.669 ********** 2025-04-13 00:57:27.406974 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.406980 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.406985 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.406991 | orchestrator | 2025-04-13 00:57:27.406997 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-13 00:57:27.407003 | orchestrator | Sunday 13 April 2025 00:51:44 +0000 (0:00:00.335) 0:07:19.004 ********** 2025-04-13 00:57:27.407009 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407014 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407020 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407026 | orchestrator | 2025-04-13 00:57:27.407031 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-13 00:57:27.407037 | orchestrator | Sunday 13 April 2025 00:51:45 +0000 (0:00:00.628) 0:07:19.633 ********** 2025-04-13 00:57:27.407043 | orchestrator | skipping: 
[testbed-node-3] 2025-04-13 00:57:27.407049 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407054 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407060 | orchestrator | 2025-04-13 00:57:27.407066 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-13 00:57:27.407072 | orchestrator | Sunday 13 April 2025 00:51:45 +0000 (0:00:00.368) 0:07:20.001 ********** 2025-04-13 00:57:27.407077 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.407083 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.407089 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.407095 | orchestrator | 2025-04-13 00:57:27.407101 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-13 00:57:27.407106 | orchestrator | Sunday 13 April 2025 00:51:46 +0000 (0:00:00.341) 0:07:20.343 ********** 2025-04-13 00:57:27.407112 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407118 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407124 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407129 | orchestrator | 2025-04-13 00:57:27.407135 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-13 00:57:27.407170 | orchestrator | Sunday 13 April 2025 00:51:46 +0000 (0:00:00.348) 0:07:20.692 ********** 2025-04-13 00:57:27.407177 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407183 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407188 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407194 | orchestrator | 2025-04-13 00:57:27.407204 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-13 00:57:27.407224 | orchestrator | Sunday 13 April 2025 00:51:47 +0000 (0:00:00.617) 0:07:21.310 ********** 2025-04-13 00:57:27.407231 | orchestrator | skipping: [testbed-node-3] 
2025-04-13 00:57:27.407237 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407243 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407249 | orchestrator | 2025-04-13 00:57:27.407255 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-13 00:57:27.407261 | orchestrator | Sunday 13 April 2025 00:51:47 +0000 (0:00:00.332) 0:07:21.643 ********** 2025-04-13 00:57:27.407266 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407272 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407278 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407288 | orchestrator | 2025-04-13 00:57:27.407294 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-13 00:57:27.407300 | orchestrator | Sunday 13 April 2025 00:51:47 +0000 (0:00:00.381) 0:07:22.024 ********** 2025-04-13 00:57:27.407305 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407311 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407317 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407323 | orchestrator | 2025-04-13 00:57:27.407329 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-13 00:57:27.407334 | orchestrator | Sunday 13 April 2025 00:51:48 +0000 (0:00:00.370) 0:07:22.395 ********** 2025-04-13 00:57:27.407340 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407346 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407352 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407357 | orchestrator | 2025-04-13 00:57:27.407363 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-13 00:57:27.407369 | orchestrator | Sunday 13 April 2025 00:51:48 +0000 (0:00:00.588) 0:07:22.983 ********** 2025-04-13 00:57:27.407375 | orchestrator | skipping: [testbed-node-3] 
2025-04-13 00:57:27.407380 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407386 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407392 | orchestrator | 2025-04-13 00:57:27.407398 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-13 00:57:27.407404 | orchestrator | Sunday 13 April 2025 00:51:49 +0000 (0:00:00.342) 0:07:23.326 ********** 2025-04-13 00:57:27.407410 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407416 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407422 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407428 | orchestrator | 2025-04-13 00:57:27.407433 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-13 00:57:27.407439 | orchestrator | Sunday 13 April 2025 00:51:49 +0000 (0:00:00.375) 0:07:23.701 ********** 2025-04-13 00:57:27.407445 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407451 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407457 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407463 | orchestrator | 2025-04-13 00:57:27.407469 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-13 00:57:27.407475 | orchestrator | Sunday 13 April 2025 00:51:49 +0000 (0:00:00.450) 0:07:24.152 ********** 2025-04-13 00:57:27.407481 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407486 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407492 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407498 | orchestrator | 2025-04-13 00:57:27.407504 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-13 00:57:27.407509 | orchestrator | Sunday 13 April 2025 00:51:50 +0000 (0:00:00.635) 
0:07:24.787 ********** 2025-04-13 00:57:27.407515 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407521 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407527 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407532 | orchestrator | 2025-04-13 00:57:27.407538 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-13 00:57:27.407544 | orchestrator | Sunday 13 April 2025 00:51:50 +0000 (0:00:00.379) 0:07:25.166 ********** 2025-04-13 00:57:27.407550 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407556 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407561 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407567 | orchestrator | 2025-04-13 00:57:27.407573 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-13 00:57:27.407579 | orchestrator | Sunday 13 April 2025 00:51:51 +0000 (0:00:00.339) 0:07:25.506 ********** 2025-04-13 00:57:27.407585 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-13 00:57:27.407590 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-13 00:57:27.407600 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407606 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-13 00:57:27.407611 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-13 00:57:27.407616 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407621 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-13 00:57:27.407627 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-13 00:57:27.407632 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407640 | orchestrator | 2025-04-13 00:57:27.407645 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-13 00:57:27.407651 | orchestrator | Sunday 13 April 2025 00:51:51 +0000 
(0:00:00.393) 0:07:25.899 ********** 2025-04-13 00:57:27.407656 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-13 00:57:27.407664 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-13 00:57:27.407669 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407674 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-13 00:57:27.407680 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-13 00:57:27.407685 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407690 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-13 00:57:27.407695 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-13 00:57:27.407701 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407706 | orchestrator | 2025-04-13 00:57:27.407711 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-13 00:57:27.407728 | orchestrator | Sunday 13 April 2025 00:51:52 +0000 (0:00:00.641) 0:07:26.541 ********** 2025-04-13 00:57:27.407734 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407739 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407745 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407750 | orchestrator | 2025-04-13 00:57:27.407755 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-13 00:57:27.407762 | orchestrator | Sunday 13 April 2025 00:51:52 +0000 (0:00:00.373) 0:07:26.914 ********** 2025-04-13 00:57:27.407771 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407781 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407789 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407799 | orchestrator | 2025-04-13 00:57:27.407808 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, 
radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-13 00:57:27.407817 | orchestrator | Sunday 13 April 2025 00:51:52 +0000 (0:00:00.349) 0:07:27.263 ********** 2025-04-13 00:57:27.407825 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407835 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407840 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407846 | orchestrator | 2025-04-13 00:57:27.407851 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-13 00:57:27.407856 | orchestrator | Sunday 13 April 2025 00:51:53 +0000 (0:00:00.405) 0:07:27.669 ********** 2025-04-13 00:57:27.407861 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407867 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407872 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407877 | orchestrator | 2025-04-13 00:57:27.407882 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-13 00:57:27.407887 | orchestrator | Sunday 13 April 2025 00:51:53 +0000 (0:00:00.605) 0:07:28.275 ********** 2025-04-13 00:57:27.407892 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407898 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407903 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407908 | orchestrator | 2025-04-13 00:57:27.407913 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-13 00:57:27.407918 | orchestrator | Sunday 13 April 2025 00:51:54 +0000 (0:00:00.330) 0:07:28.606 ********** 2025-04-13 00:57:27.407928 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407933 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.407939 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.407944 | orchestrator | 2025-04-13 00:57:27.407949 | orchestrator | 
TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-13 00:57:27.407957 | orchestrator | Sunday 13 April 2025 00:51:54 +0000 (0:00:00.384) 0:07:28.991 ********** 2025-04-13 00:57:27.407962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-13 00:57:27.407968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-13 00:57:27.407973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-13 00:57:27.407978 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.407983 | orchestrator | 2025-04-13 00:57:27.407988 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-13 00:57:27.407994 | orchestrator | Sunday 13 April 2025 00:51:55 +0000 (0:00:00.447) 0:07:29.439 ********** 2025-04-13 00:57:27.407999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-13 00:57:27.408004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-13 00:57:27.408009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-13 00:57:27.408014 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408020 | orchestrator | 2025-04-13 00:57:27.408025 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-13 00:57:27.408030 | orchestrator | Sunday 13 April 2025 00:51:55 +0000 (0:00:00.473) 0:07:29.913 ********** 2025-04-13 00:57:27.408035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-13 00:57:27.408040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-13 00:57:27.408046 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-13 00:57:27.408051 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408056 | orchestrator | 2025-04-13 00:57:27.408061 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] 
*************************** 2025-04-13 00:57:27.408066 | orchestrator | Sunday 13 April 2025 00:51:56 +0000 (0:00:00.433) 0:07:30.346 ********** 2025-04-13 00:57:27.408072 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408077 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408082 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408087 | orchestrator | 2025-04-13 00:57:27.408092 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-13 00:57:27.408097 | orchestrator | Sunday 13 April 2025 00:51:56 +0000 (0:00:00.572) 0:07:30.918 ********** 2025-04-13 00:57:27.408103 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-13 00:57:27.408108 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-13 00:57:27.408113 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408118 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408123 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-13 00:57:27.408128 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408134 | orchestrator | 2025-04-13 00:57:27.408151 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-13 00:57:27.408157 | orchestrator | Sunday 13 April 2025 00:51:57 +0000 (0:00:00.501) 0:07:31.420 ********** 2025-04-13 00:57:27.408162 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408167 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408173 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408178 | orchestrator | 2025-04-13 00:57:27.408183 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-13 00:57:27.408188 | orchestrator | Sunday 13 April 2025 00:51:57 +0000 (0:00:00.333) 0:07:31.753 ********** 2025-04-13 00:57:27.408193 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408199 | 
orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408204 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408213 | orchestrator | 2025-04-13 00:57:27.408232 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-13 00:57:27.408238 | orchestrator | Sunday 13 April 2025 00:51:57 +0000 (0:00:00.381) 0:07:32.134 ********** 2025-04-13 00:57:27.408244 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-13 00:57:27.408249 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-13 00:57:27.408254 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408260 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408265 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-13 00:57:27.408270 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408275 | orchestrator | 2025-04-13 00:57:27.408281 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-13 00:57:27.408286 | orchestrator | Sunday 13 April 2025 00:51:58 +0000 (0:00:00.798) 0:07:32.932 ********** 2025-04-13 00:57:27.408291 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-13 00:57:27.408297 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408302 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-13 00:57:27.408307 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408313 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-13 00:57:27.408318 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408323 | orchestrator | 2025-04-13 00:57:27.408328 | orchestrator | TASK [ceph-facts : 
set_fact rgw_instances_all] ********************************* 2025-04-13 00:57:27.408334 | orchestrator | Sunday 13 April 2025 00:51:58 +0000 (0:00:00.360) 0:07:33.293 ********** 2025-04-13 00:57:27.408339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-13 00:57:27.408344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-13 00:57:27.408349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-13 00:57:27.408355 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-13 00:57:27.408360 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-13 00:57:27.408365 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-13 00:57:27.408370 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408376 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408381 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-13 00:57:27.408386 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-13 00:57:27.408392 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-13 00:57:27.408397 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408402 | orchestrator | 2025-04-13 00:57:27.408408 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-13 00:57:27.408413 | orchestrator | Sunday 13 April 2025 00:51:59 +0000 (0:00:00.613) 0:07:33.906 ********** 2025-04-13 00:57:27.408418 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408424 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408429 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408434 | orchestrator | 2025-04-13 00:57:27.408439 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-13 00:57:27.408445 | orchestrator | Sunday 13 April 2025 
00:52:00 +0000 (0:00:00.855) 0:07:34.762 ********** 2025-04-13 00:57:27.408450 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-13 00:57:27.408455 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408461 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-13 00:57:27.408466 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408471 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-13 00:57:27.408477 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408482 | orchestrator | 2025-04-13 00:57:27.408491 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-13 00:57:27.408496 | orchestrator | Sunday 13 April 2025 00:52:01 +0000 (0:00:00.581) 0:07:35.344 ********** 2025-04-13 00:57:27.408501 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408510 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408516 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408521 | orchestrator | 2025-04-13 00:57:27.408527 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-13 00:57:27.408532 | orchestrator | Sunday 13 April 2025 00:52:01 +0000 (0:00:00.869) 0:07:36.213 ********** 2025-04-13 00:57:27.408537 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408543 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408548 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408553 | orchestrator | 2025-04-13 00:57:27.408558 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-04-13 00:57:27.408564 | orchestrator | Sunday 13 April 2025 00:52:02 +0000 (0:00:00.540) 0:07:36.754 ********** 2025-04-13 00:57:27.408569 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.408574 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.408580 | orchestrator | ok: 
[testbed-node-5] 2025-04-13 00:57:27.408585 | orchestrator | 2025-04-13 00:57:27.408590 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-04-13 00:57:27.408595 | orchestrator | Sunday 13 April 2025 00:52:03 +0000 (0:00:00.620) 0:07:37.375 ********** 2025-04-13 00:57:27.408603 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-13 00:57:27.408609 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-13 00:57:27.408614 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-13 00:57:27.408619 | orchestrator | 2025-04-13 00:57:27.408625 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-04-13 00:57:27.408630 | orchestrator | Sunday 13 April 2025 00:52:03 +0000 (0:00:00.699) 0:07:38.074 ********** 2025-04-13 00:57:27.408647 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:57:27.408653 | orchestrator | 2025-04-13 00:57:27.408658 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-04-13 00:57:27.408664 | orchestrator | Sunday 13 April 2025 00:52:04 +0000 (0:00:00.564) 0:07:38.639 ********** 2025-04-13 00:57:27.408669 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408674 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408680 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408685 | orchestrator | 2025-04-13 00:57:27.408690 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-04-13 00:57:27.408695 | orchestrator | Sunday 13 April 2025 00:52:04 +0000 (0:00:00.340) 0:07:38.979 ********** 2025-04-13 00:57:27.408701 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408706 | 
orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408711 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408717 | orchestrator | 2025-04-13 00:57:27.408722 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-04-13 00:57:27.408727 | orchestrator | Sunday 13 April 2025 00:52:05 +0000 (0:00:00.626) 0:07:39.605 ********** 2025-04-13 00:57:27.408732 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408738 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408743 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408748 | orchestrator | 2025-04-13 00:57:27.408753 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-04-13 00:57:27.408759 | orchestrator | Sunday 13 April 2025 00:52:05 +0000 (0:00:00.344) 0:07:39.950 ********** 2025-04-13 00:57:27.408764 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.408769 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.408774 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.408783 | orchestrator | 2025-04-13 00:57:27.408789 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-04-13 00:57:27.408794 | orchestrator | Sunday 13 April 2025 00:52:05 +0000 (0:00:00.299) 0:07:40.250 ********** 2025-04-13 00:57:27.408799 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.408804 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.408810 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.408815 | orchestrator | 2025-04-13 00:57:27.408820 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-04-13 00:57:27.408826 | orchestrator | Sunday 13 April 2025 00:52:06 +0000 (0:00:00.684) 0:07:40.934 ********** 2025-04-13 00:57:27.408831 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.408836 | orchestrator | ok: 
[testbed-node-4]
2025-04-13 00:57:27.408842 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.408847 | orchestrator |
2025-04-13 00:57:27.408852 | orchestrator | TASK [ceph-osd : apply operating system tuning] ********************************
2025-04-13 00:57:27.408857 | orchestrator | Sunday 13 April 2025 00:52:07 +0000 (0:00:00.694) 0:07:41.628 **********
2025-04-13 00:57:27.408863 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-04-13 00:57:27.408871 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-04-13 00:57:27.408877 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-04-13 00:57:27.408882 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-04-13 00:57:27.408888 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-04-13 00:57:27.408893 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-04-13 00:57:27.408898 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-04-13 00:57:27.408904 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-04-13 00:57:27.408909 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-04-13 00:57:27.408914 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-04-13 00:57:27.408920 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-04-13 00:57:27.408925 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-04-13 00:57:27.408930 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-04-13 00:57:27.408935 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-04-13 00:57:27.408941 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-04-13 00:57:27.408946 | orchestrator |
2025-04-13 00:57:27.408951 | orchestrator | TASK [ceph-osd : install dependencies] *****************************************
2025-04-13 00:57:27.408957 | orchestrator | Sunday 13 April 2025 00:52:09 +0000 (0:00:02.088) 0:07:43.717 **********
2025-04-13 00:57:27.408962 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.408967 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.408972 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.408978 | orchestrator |
2025-04-13 00:57:27.408983 | orchestrator | TASK [ceph-osd : include_tasks common.yml] *************************************
2025-04-13 00:57:27.408988 | orchestrator | Sunday 13 April 2025 00:52:09 +0000 (0:00:00.320) 0:07:44.037 **********
2025-04-13 00:57:27.408993 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.408999 | orchestrator |
2025-04-13 00:57:27.409006 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] *********************
2025-04-13 00:57:27.409012 | orchestrator | Sunday 13 April 2025 00:52:10 +0000 (0:00:00.873) 0:07:44.910 **********
2025-04-13 00:57:27.409020 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-04-13 00:57:27.409038 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-04-13 00:57:27.409044 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-04-13 00:57:27.409049 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-04-13 00:57:27.409054 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-04-13 00:57:27.409059 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-04-13 00:57:27.409065 | orchestrator |
2025-04-13 00:57:27.409070 | orchestrator | TASK [ceph-osd : get keys from monitors] ***************************************
2025-04-13 00:57:27.409075 | orchestrator | Sunday 13 April 2025 00:52:11 +0000 (0:00:00.995) 0:07:45.905 **********
2025-04-13 00:57:27.409081 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:57:27.409086 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-13 00:57:27.409091 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-04-13 00:57:27.409096 | orchestrator |
2025-04-13 00:57:27.409102 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] ***********************************
2025-04-13 00:57:27.409110 | orchestrator | Sunday 13 April 2025 00:52:13 +0000 (0:00:01.750) 0:07:47.656 **********
2025-04-13 00:57:27.409116 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-04-13 00:57:27.409121 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-13 00:57:27.409126 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.409135 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-04-13 00:57:27.409168 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-04-13 00:57:27.409174 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.409179 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-04-13 00:57:27.409184 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-04-13 00:57:27.409189 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.409195 | orchestrator |
2025-04-13 00:57:27.409200 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************
2025-04-13 00:57:27.409205 | orchestrator | Sunday 13 April 2025 00:52:14 +0000 (0:00:01.542) 0:07:49.199 **********
2025-04-13 00:57:27.409211 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-04-13 00:57:27.409216 | orchestrator |
2025-04-13 00:57:27.409221 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] **************************
2025-04-13 00:57:27.409226 | orchestrator | Sunday 13 April 2025 00:52:16 +0000 (0:00:02.075) 0:07:51.274 **********
2025-04-13 00:57:27.409231 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.409237 | orchestrator |
2025-04-13 00:57:27.409242 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] ***
2025-04-13 00:57:27.409247 | orchestrator | Sunday 13 April 2025 00:52:17 +0000 (0:00:00.546) 0:07:51.820 **********
2025-04-13 00:57:27.409257 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.409262 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.409268 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.409273 | orchestrator |
2025-04-13 00:57:27.409278 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] ***
2025-04-13 00:57:27.409283 | orchestrator | Sunday 13 April 2025 00:52:18 +0000 (0:00:00.563) 0:07:52.383 **********
2025-04-13 00:57:27.409289 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.409294 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.409299 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.409304 | orchestrator |
2025-04-13 00:57:27.409309 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] ***
2025-04-13 00:57:27.409315 | orchestrator | Sunday 13 April 2025 00:52:18 +0000 (0:00:00.338) 0:07:52.722 **********
2025-04-13 00:57:27.409320 | orchestrator | skipping: [testbed-node-3]
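The "apply operating system tuning" task above iterates over sysctl name/value pairs (fs.aio-max-nr, fs.file-max, vm.zone_reclaim_mode, vm.swappiness, vm.min_free_kbytes). A minimal sketch of that loop shape in ceph-ansible-style YAML, assuming the `ansible.posix.sysctl` module; the sysctl_file path and inline item list are illustrative, not taken from this log:

```yaml
# Sketch only: loop shape inferred from the log items above;
# the sysctl_file location is an assumption.
- name: apply operating system tuning
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_file: /etc/sysctl.d/ceph-tuning.conf
  loop:
    - { name: fs.aio-max-nr, value: "1048576" }
    - { name: fs.file-max, value: "26234859" }
    - { name: vm.zone_reclaim_mode, value: "0" }
    - { name: vm.swappiness, value: "10" }
    - { name: vm.min_free_kbytes, value: "67584" }
```

In the actual role the list comes from a variable rather than being inlined, which is why each log item also carries an `enable` key.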
2025-04-13 00:57:27.409329 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.409334 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.409339 | orchestrator |
2025-04-13 00:57:27.409345 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] ***
2025-04-13 00:57:27.409350 | orchestrator | Sunday 13 April 2025 00:52:18 +0000 (0:00:00.337) 0:07:53.060 **********
2025-04-13 00:57:27.409355 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.409360 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.409366 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.409371 | orchestrator |
2025-04-13 00:57:27.409376 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ******************************
2025-04-13 00:57:27.409380 | orchestrator | Sunday 13 April 2025 00:52:19 +0000 (0:00:00.367) 0:07:53.428 **********
2025-04-13 00:57:27.409385 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.409390 | orchestrator |
2025-04-13 00:57:27.409395 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] *********************
2025-04-13 00:57:27.409400 | orchestrator | Sunday 13 April 2025 00:52:19 +0000 (0:00:00.870) 0:07:54.299 **********
2025-04-13 00:57:27.409405 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2045bad1-ab77-5a33-981a-e42fb4136085', 'data_vg': 'ceph-2045bad1-ab77-5a33-981a-e42fb4136085'})
2025-04-13 00:57:27.409410 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a50ad019-9a42-5399-96dd-0ec75fe99929', 'data_vg': 'ceph-a50ad019-9a42-5399-96dd-0ec75fe99929'})
2025-04-13 00:57:27.409415 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a', 'data_vg': 'ceph-c75c5404-ac9a-5ffa-97a7-d9feeb5e7a2a'})
2025-04-13 00:57:27.409420 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-075038e7-2b9c-5de1-9fc0-4ab80f908b26', 'data_vg': 'ceph-075038e7-2b9c-5de1-9fc0-4ab80f908b26'})
2025-04-13 00:57:27.409437 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23', 'data_vg': 'ceph-c1aa12de-f4f1-5fa1-83b9-2c9c84fd1e23'})
2025-04-13 00:57:27.409443 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cc16a9be-1c89-5ed3-8c34-f79b9c168598', 'data_vg': 'ceph-cc16a9be-1c89-5ed3-8c34-f79b9c168598'})
2025-04-13 00:57:27.409448 | orchestrator |
2025-04-13 00:57:27.409453 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************
2025-04-13 00:57:27.409458 | orchestrator | Sunday 13 April 2025 00:53:01 +0000 (0:00:41.755) 0:08:36.055 **********
2025-04-13 00:57:27.409462 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.409467 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.409472 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.409477 | orchestrator |
2025-04-13 00:57:27.409481 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] *********************************
2025-04-13 00:57:27.409486 | orchestrator | Sunday 13 April 2025 00:53:02 +0000 (0:00:00.587) 0:08:36.642 **********
2025-04-13 00:57:27.409491 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.409496 | orchestrator |
2025-04-13 00:57:27.409500 | orchestrator | TASK [ceph-osd : get osd ids] **************************************************
2025-04-13 00:57:27.409505 | orchestrator | Sunday 13 April 2025 00:53:02 +0000 (0:00:00.533) 0:08:37.176 **********
2025-04-13 00:57:27.409510 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.409515 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.409519 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.409527 | orchestrator |
2025-04-13 00:57:27.409532 | orchestrator | TASK [ceph-osd : collect osd ids] **********************************************
2025-04-13 00:57:27.409536 | orchestrator | Sunday 13 April 2025 00:53:03 +0000 (0:00:00.643) 0:08:37.820 **********
2025-04-13 00:57:27.409541 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.409546 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.409551 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.409559 | orchestrator |
2025-04-13 00:57:27.409564 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************
2025-04-13 00:57:27.409568 | orchestrator | Sunday 13 April 2025 00:53:05 +0000 (0:00:01.889) 0:08:39.710 **********
2025-04-13 00:57:27.409573 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.409578 | orchestrator |
2025-04-13 00:57:27.409583 | orchestrator | TASK [ceph-osd : generate systemd unit file] ***********************************
2025-04-13 00:57:27.409587 | orchestrator | Sunday 13 April 2025 00:53:05 +0000 (0:00:00.551) 0:08:40.261 **********
2025-04-13 00:57:27.409592 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.409597 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.409602 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.409606 | orchestrator |
2025-04-13 00:57:27.409611 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************
2025-04-13 00:57:27.409618 | orchestrator | Sunday 13 April 2025 00:53:07 +0000 (0:00:01.417) 0:08:41.679 **********
2025-04-13 00:57:27.409623 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.409628 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.409633 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.409638 | orchestrator |
2025-04-13 00:57:27.409642 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] ***************************************
2025-04-13 00:57:27.409647 | orchestrator | Sunday 13 April 2025 00:53:08 +0000 (0:00:01.169) 0:08:42.848 **********
2025-04-13 00:57:27.409652 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.409657 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.409661 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.409666 | orchestrator |
2025-04-13 00:57:27.409671 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] *************
2025-04-13 00:57:27.409676 | orchestrator | Sunday 13 April 2025 00:53:10 +0000 (0:00:01.607) 0:08:44.456 **********
2025-04-13 00:57:27.409680 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.409685 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.409690 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.409695 | orchestrator |
2025-04-13 00:57:27.409699 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] ***********************
2025-04-13 00:57:27.409704 | orchestrator | Sunday 13 April 2025 00:53:10 +0000 (0:00:00.373) 0:08:44.829 **********
2025-04-13 00:57:27.409709 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.409714 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.409718 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.409723 | orchestrator |
2025-04-13 00:57:27.409728 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] ***
2025-04-13 00:57:27.409733 | orchestrator | Sunday 13 April 2025 00:53:11 +0000 (0:00:00.603) 0:08:45.433 **********
2025-04-13 00:57:27.409738 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-04-13 00:57:27.409742 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-04-13 00:57:27.409747 | orchestrator | ok: [testbed-node-5] => (item=2)
2025-04-13 00:57:27.409752 | orchestrator | ok: [testbed-node-3] => (item=3)
2025-04-13 00:57:27.409757 | orchestrator | ok: [testbed-node-4] => (item=4)
2025-04-13 00:57:27.409761 | orchestrator | ok: [testbed-node-5] => (item=5)
2025-04-13 00:57:27.409766 | orchestrator |
2025-04-13 00:57:27.409771 | orchestrator | TASK [ceph-osd : systemd start osd] ********************************************
2025-04-13 00:57:27.409776 | orchestrator | Sunday 13 April 2025 00:53:12 +0000 (0:00:01.049) 0:08:46.482 **********
2025-04-13 00:57:27.409781 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-04-13 00:57:27.409785 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-04-13 00:57:27.409790 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-04-13 00:57:27.409795 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-04-13 00:57:27.409800 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-04-13 00:57:27.409804 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-04-13 00:57:27.409809 | orchestrator |
2025-04-13 00:57:27.409816 | orchestrator | TASK [ceph-osd : unset noup flag] **********************************************
2025-04-13 00:57:27.409831 | orchestrator | Sunday 13 April 2025 00:53:15 +0000 (0:00:03.229) 0:08:49.712 **********
2025-04-13 00:57:27.409837 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.409841 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.409846 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-04-13 00:57:27.409851 | orchestrator |
2025-04-13 00:57:27.409856 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************
2025-04-13 00:57:27.409861 | orchestrator | Sunday 13 April 2025 00:53:18 +0000 (0:00:02.635) 0:08:52.348 **********
2025-04-13 00:57:27.409866 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.409870 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.409875 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left).
2025-04-13 00:57:27.409880 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-04-13 00:57:27.409885 | orchestrator |
2025-04-13 00:57:27.409890 | orchestrator | TASK [ceph-osd : include crush_rules.yml] **************************************
2025-04-13 00:57:27.409895 | orchestrator | Sunday 13 April 2025 00:53:30 +0000 (0:00:12.681) 0:09:05.029 **********
2025-04-13 00:57:27.409899 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.409904 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.409909 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.409914 | orchestrator |
2025-04-13 00:57:27.409918 | orchestrator | TASK [ceph-osd : include openstack_config.yml] *********************************
2025-04-13 00:57:27.409923 | orchestrator | Sunday 13 April 2025 00:53:31 +0000 (0:00:00.472) 0:09:05.502 **********
2025-04-13 00:57:27.409928 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.409933 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.409937 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.409942 | orchestrator |
2025-04-13 00:57:27.409947 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-04-13 00:57:27.409952 | orchestrator | Sunday 13 April 2025 00:53:32 +0000 (0:00:01.289) 0:09:06.791 **********
2025-04-13 00:57:27.409956 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.409961 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.409966 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.409971 | orchestrator |
2025-04-13 00:57:27.409975 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] **********************************
2025-04-13 00:57:27.409980 | orchestrator | Sunday 13 April 2025 00:53:33 +0000 (0:00:00.663) 0:09:07.454 **********
2025-04-13 00:57:27.409985 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.409990 | orchestrator |
2025-04-13 00:57:27.409994 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] **********************
2025-04-13 00:57:27.409999 | orchestrator | Sunday 13 April 2025 00:53:34 +0000 (0:00:00.863) 0:09:08.318 **********
2025-04-13 00:57:27.410004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.410009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.410027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.410033 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410038 | orchestrator |
2025-04-13 00:57:27.410042 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ********
2025-04-13 00:57:27.410047 | orchestrator | Sunday 13 April 2025 00:53:34 +0000 (0:00:00.419) 0:09:08.738 **********
2025-04-13 00:57:27.410052 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410057 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.410061 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.410066 | orchestrator |
2025-04-13 00:57:27.410071 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] *******************************
2025-04-13 00:57:27.410076 | orchestrator | Sunday 13 April 2025 00:53:34 +0000 (0:00:00.245) 0:09:09.053 **********
2025-04-13 00:57:27.410084 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410089 | orchestrator |
2025-04-13 00:57:27.410094 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] ***********************
2025-04-13 00:57:27.410099 | orchestrator | Sunday 13 April 2025 00:53:34 +0000 (0:00:00.590) 0:09:09.299 **********
2025-04-13 00:57:27.410104 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410108 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.410113 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.410118 | orchestrator |
2025-04-13 00:57:27.410123 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] *********************************
2025-04-13 00:57:27.410131 | orchestrator | Sunday 13 April 2025 00:53:35 +0000 (0:00:00.590) 0:09:09.890 **********
2025-04-13 00:57:27.410136 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410154 | orchestrator |
2025-04-13 00:57:27.410161 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ********************
2025-04-13 00:57:27.410169 | orchestrator | Sunday 13 April 2025 00:53:35 +0000 (0:00:00.260) 0:09:10.150 **********
2025-04-13 00:57:27.410177 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410184 | orchestrator |
2025-04-13 00:57:27.410192 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
2025-04-13 00:57:27.410199 | orchestrator | Sunday 13 April 2025 00:53:36 +0000 (0:00:00.246) 0:09:10.397 **********
2025-04-13 00:57:27.410204 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410208 | orchestrator |
2025-04-13 00:57:27.410213 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ******************************
2025-04-13 00:57:27.410218 | orchestrator | Sunday 13 April 2025 00:53:36 +0000 (0:00:00.116) 0:09:10.514 **********
2025-04-13 00:57:27.410223 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410227 | orchestrator |
2025-04-13 00:57:27.410232 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] *****************
2025-04-13 00:57:27.410237 | orchestrator | Sunday 13 April 2025 00:53:36 +0000 (0:00:00.241) 0:09:10.756 **********
2025-04-13 00:57:27.410241 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410246 | orchestrator |
2025-04-13 00:57:27.410251 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] *******************
2025-04-13 00:57:27.410255 | orchestrator | Sunday 13 April 2025 00:53:36 +0000 (0:00:00.240) 0:09:10.996 **********
2025-04-13 00:57:27.410260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.410279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.410284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.410289 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410297 | orchestrator |
2025-04-13 00:57:27.410302 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] *********
2025-04-13 00:57:27.410307 | orchestrator | Sunday 13 April 2025 00:53:37 +0000 (0:00:00.413) 0:09:11.409 **********
2025-04-13 00:57:27.410311 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410316 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.410321 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.410326 | orchestrator |
2025-04-13 00:57:27.410331 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] ***************
2025-04-13 00:57:27.410335 | orchestrator | Sunday 13 April 2025 00:53:37 +0000 (0:00:00.325) 0:09:11.734 **********
2025-04-13 00:57:27.410340 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410345 | orchestrator |
2025-04-13 00:57:27.410350 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] ****************************
2025-04-13 00:57:27.410355 | orchestrator | Sunday 13 April 2025 00:53:38 +0000 (0:00:00.794) 0:09:12.529 **********
2025-04-13 00:57:27.410359 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410364 | orchestrator |
2025-04-13 00:57:27.410369 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-13 00:57:27.410374 | orchestrator | Sunday 13 April 2025 00:53:38 +0000 (0:00:00.231) 0:09:12.761 **********
2025-04-13 00:57:27.410378 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.410387 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.410391 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.410396 | orchestrator |
2025-04-13 00:57:27.410401 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-04-13 00:57:27.410406 | orchestrator |
2025-04-13 00:57:27.410411 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-04-13 00:57:27.410415 | orchestrator | Sunday 13 April 2025 00:53:41 +0000 (0:00:02.939) 0:09:15.700 **********
2025-04-13 00:57:27.410420 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.410425 | orchestrator |
2025-04-13 00:57:27.410430 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-04-13 00:57:27.410435 | orchestrator | Sunday 13 April 2025 00:53:42 +0000 (0:00:01.367) 0:09:17.068 **********
2025-04-13 00:57:27.410440 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410445 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.410450 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.410454 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.410459 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.410464 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.410469 | orchestrator |
2025-04-13 00:57:27.410474 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-04-13 00:57:27.410478 | orchestrator | Sunday 13 April 2025 00:53:43 +0000 (0:00:00.810) 0:09:17.878 **********
2025-04-13 00:57:27.410483 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.410488 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.410493 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.410498 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.410503 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.410507 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.410512 | orchestrator |
2025-04-13 00:57:27.410517 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-04-13 00:57:27.410522 | orchestrator | Sunday 13 April 2025 00:53:44 +0000 (0:00:01.282) 0:09:19.161 **********
2025-04-13 00:57:27.410526 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.410531 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.410536 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.410541 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.410546 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.410551 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.410556 | orchestrator |
2025-04-13 00:57:27.410560 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-04-13 00:57:27.410565 | orchestrator | Sunday 13 April 2025 00:53:46 +0000 (0:00:01.226) 0:09:20.388 **********
2025-04-13 00:57:27.410570 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.410575 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.410580 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.410584 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.410589 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.410594 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.410599 | orchestrator |
2025-04-13 00:57:27.410604 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-04-13 00:57:27.410611 | orchestrator | Sunday 13 April 2025 00:53:47 +0000 (0:00:01.023) 0:09:21.411 **********
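The "use ceph-volume to create bluestore osds" task earlier in this play loops over `data`/`data_vg` pairs, one per pre-created LVM volume. A sketch of that task shape in ceph-ansible-style YAML; the `ceph_volume` module parameters shown here are assumptions inferred from the log items and task name, and `lvm_volumes` is the conventional variable name, not confirmed by this output:

```yaml
# Sketch only: one OSD is created per item; dmcrypt: true reflects the
# '-e osd_dmcrypt=1' container_env_args fact selected earlier in the log.
- name: use ceph-volume to create bluestore osds
  ceph_volume:
    cluster: ceph
    objectstore: bluestore
    data: "{{ item.data }}"
    data_vg: "{{ item.data_vg }}"
    dmcrypt: true
    action: create
  loop: "{{ lvm_volumes }}"
```

Each item maps to roughly `ceph-volume lvm create --bluestore --dmcrypt --data {data_vg}/{data}`, which explains the ~42 s runtime recorded for this task.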
2025-04-13 00:57:27.410616 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410621 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.410625 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.410630 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.410635 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.410640 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.410645 | orchestrator |
2025-04-13 00:57:27.410649 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-04-13 00:57:27.410657 | orchestrator | Sunday 13 April 2025 00:53:48 +0000 (0:00:00.933) 0:09:22.345 **********
2025-04-13 00:57:27.410662 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.410667 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.410672 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.410676 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410681 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.410686 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.410691 | orchestrator |
2025-04-13 00:57:27.410696 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-04-13 00:57:27.410701 | orchestrator | Sunday 13 April 2025 00:53:48 +0000 (0:00:00.641) 0:09:22.986 **********
2025-04-13 00:57:27.410705 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.410710 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.410715 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.410720 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410738 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.410744 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.410749 | orchestrator |
2025-04-13 00:57:27.410754 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-04-13 00:57:27.410759 | orchestrator | Sunday 13 April 2025 00:53:49 +0000 (0:00:00.853) 0:09:23.840 **********
2025-04-13 00:57:27.410763 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.410772 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.410777 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.410781 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410786 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.410791 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.410796 | orchestrator |
2025-04-13 00:57:27.410801 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-04-13 00:57:27.410806 | orchestrator | Sunday 13 April 2025 00:53:50 +0000 (0:00:00.649) 0:09:24.490 **********
2025-04-13 00:57:27.410810 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.410815 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.410820 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.410825 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410830 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.410834 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.410839 | orchestrator |
2025-04-13 00:57:27.410844 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-04-13 00:57:27.410849 | orchestrator | Sunday 13 April 2025 00:53:51 +0000 (0:00:00.966) 0:09:25.456 **********
2025-04-13 00:57:27.410853 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.410858 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.410863 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.410868 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410873 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.410878 | orchestrator | skipping: [testbed-node-5]
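The series of "check for a ... container" tasks in this play probe each node for a running daemon container so that later handlers know what can be restarted; tasks for daemons a node does not host are skipped. A sketch of one such probe, assuming a `podman ps` filter and a register name modeled on the task names; neither is confirmed by this log:

```yaml
# Sketch only: the container runtime command, name filter, and register
# variable are assumptions based on the task names above.
- name: check for a ceph-crash container
  command: "podman ps -q --filter name=ceph-crash-{{ ansible_facts['hostname'] }}"
  register: ceph_crash_container_stat
  changed_when: false
  failed_when: false
```

Using `changed_when: false` and `failed_when: false` keeps the probe side-effect-free, which matches the uniform `ok`/`skipping` results seen above.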
2025-04-13 00:57:27.410882 | orchestrator |
2025-04-13 00:57:27.410887 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-04-13 00:57:27.410892 | orchestrator | Sunday 13 April 2025 00:53:51 +0000 (0:00:00.646) 0:09:26.102 **********
2025-04-13 00:57:27.410897 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.410902 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.410906 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.410911 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.410916 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.410921 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.410925 | orchestrator |
2025-04-13 00:57:27.410930 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-04-13 00:57:27.410935 | orchestrator | Sunday 13 April 2025 00:53:53 +0000 (0:00:01.274) 0:09:27.376 **********
2025-04-13 00:57:27.410940 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.410945 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.410952 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.410957 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.410962 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.410967 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.410972 | orchestrator |
2025-04-13 00:57:27.410977 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-04-13 00:57:27.410981 | orchestrator | Sunday 13 April 2025 00:53:53 +0000 (0:00:00.670) 0:09:28.047 **********
2025-04-13 00:57:27.410986 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.410991 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.410996 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.411001 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.411005 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.411010 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.411015 | orchestrator |
2025-04-13 00:57:27.411020 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-04-13 00:57:27.411025 | orchestrator | Sunday 13 April 2025 00:53:54 +0000 (0:00:00.849) 0:09:28.896 **********
2025-04-13 00:57:27.411029 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.411034 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.411039 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.411044 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.411048 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.411053 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.411058 | orchestrator |
2025-04-13 00:57:27.411063 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-04-13 00:57:27.411067 | orchestrator | Sunday 13 April 2025 00:53:55 +0000 (0:00:00.634) 0:09:29.530 **********
2025-04-13 00:57:27.411072 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.411077 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.411082 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.411087 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.411091 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.411096 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.411101 | orchestrator |
2025-04-13 00:57:27.411106 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-04-13 00:57:27.411111 | orchestrator | Sunday 13 April 2025 00:53:56 +0000 (0:00:00.917) 0:09:30.448 **********
2025-04-13 00:57:27.411115 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.411120 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.411125 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.411130 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.411135 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.411153 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.411158 | orchestrator |
2025-04-13 00:57:27.411163 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-04-13 00:57:27.411168 | orchestrator | Sunday 13 April 2025 00:53:56 +0000 (0:00:00.653) 0:09:31.102 **********
2025-04-13 00:57:27.411172 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.411177 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.411182 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.411187 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.411192 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.411197 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.411201 | orchestrator |
2025-04-13 00:57:27.411206 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-04-13 00:57:27.411211 | orchestrator | Sunday 13 April 2025 00:53:57 +0000 (0:00:00.870) 0:09:31.972 **********
2025-04-13 00:57:27.411216 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.411233 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.411239 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.411244 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.411248 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.411253 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.411263 | orchestrator |
2025-04-13 00:57:27.411268 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-04-13 00:57:27.411275 | orchestrator | Sunday 13 April 2025 00:53:58 +0000 (0:00:00.664) 0:09:32.636 **********
2025-04-13 00:57:27.411281 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.411286 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.411290 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.411295 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.411300 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.411305 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.411310 | orchestrator |
2025-04-13 00:57:27.411314 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-04-13 00:57:27.411319 | orchestrator | Sunday 13 April 2025 00:53:59 +0000 (0:00:00.879) 0:09:33.516 **********
2025-04-13 00:57:27.411324 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.411329 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.411334 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.411338 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.411343 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.411348 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.411352 | orchestrator |
2025-04-13 00:57:27.411357 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-04-13 00:57:27.411364 | orchestrator | Sunday 13 April 2025 00:53:59 +0000 (0:00:00.656) 0:09:34.172 **********
2025-04-13 00:57:27.411369 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.411374 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.411379 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.411384 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.411389 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.411393 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.411398 | orchestrator |
2025-04-13 00:57:27.411403 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-04-13 00:57:27.411408 | orchestrator | Sunday 13 April 2025 00:54:00 +0000 (0:00:00.896) 0:09:35.069 **********
2025-04-13
00:57:27.411413 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.411418 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.411422 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.411427 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.411432 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.411437 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.411442 | orchestrator | 2025-04-13 00:57:27.411447 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-13 00:57:27.411451 | orchestrator | Sunday 13 April 2025 00:54:01 +0000 (0:00:00.702) 0:09:35.772 ********** 2025-04-13 00:57:27.411456 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.411461 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.411466 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.411471 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.411476 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.411480 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.411485 | orchestrator | 2025-04-13 00:57:27.411490 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-13 00:57:27.411495 | orchestrator | Sunday 13 April 2025 00:54:02 +0000 (0:00:01.128) 0:09:36.901 ********** 2025-04-13 00:57:27.411500 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.411504 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.411509 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.411514 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.411519 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.411524 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.411528 | orchestrator | 2025-04-13 00:57:27.411533 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] 
****************** 2025-04-13 00:57:27.411538 | orchestrator | Sunday 13 April 2025 00:54:03 +0000 (0:00:00.692) 0:09:37.593 ********** 2025-04-13 00:57:27.411548 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.411555 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.411560 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.411565 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.411570 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.411575 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.411579 | orchestrator | 2025-04-13 00:57:27.411584 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-13 00:57:27.411589 | orchestrator | Sunday 13 April 2025 00:54:04 +0000 (0:00:00.888) 0:09:38.482 ********** 2025-04-13 00:57:27.411594 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.411599 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.411604 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.411609 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.411613 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.411618 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.411623 | orchestrator | 2025-04-13 00:57:27.411628 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-13 00:57:27.411633 | orchestrator | Sunday 13 April 2025 00:54:04 +0000 (0:00:00.632) 0:09:39.114 ********** 2025-04-13 00:57:27.411637 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.411642 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.411647 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.411652 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.411657 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.411662 | orchestrator | skipping: [testbed-node-5] 2025-04-13 
00:57:27.411666 | orchestrator | 2025-04-13 00:57:27.411671 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-13 00:57:27.411676 | orchestrator | Sunday 13 April 2025 00:54:05 +0000 (0:00:00.910) 0:09:40.024 ********** 2025-04-13 00:57:27.411681 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.411686 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.411690 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.411695 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.411700 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.411705 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.411710 | orchestrator | 2025-04-13 00:57:27.411726 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-13 00:57:27.411732 | orchestrator | Sunday 13 April 2025 00:54:06 +0000 (0:00:00.676) 0:09:40.701 ********** 2025-04-13 00:57:27.411737 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.411742 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.411747 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.411752 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.411756 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.411761 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.411766 | orchestrator | 2025-04-13 00:57:27.411771 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-13 00:57:27.411776 | orchestrator | Sunday 13 April 2025 00:54:07 +0000 (0:00:00.886) 0:09:41.587 ********** 2025-04-13 00:57:27.411781 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.411785 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.411790 | orchestrator | skipping: [testbed-node-2] 2025-04-13 
00:57:27.411795 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.411800 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.411805 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.411809 | orchestrator | 2025-04-13 00:57:27.411814 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-13 00:57:27.411819 | orchestrator | Sunday 13 April 2025 00:54:07 +0000 (0:00:00.672) 0:09:42.260 ********** 2025-04-13 00:57:27.411824 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.411832 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.411837 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.411842 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.411847 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.411852 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.411856 | orchestrator | 2025-04-13 00:57:27.411861 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-13 00:57:27.411866 | orchestrator | Sunday 13 April 2025 00:54:08 +0000 (0:00:00.860) 0:09:43.120 ********** 2025-04-13 00:57:27.411871 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.411875 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.411880 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.411885 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.411890 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.411895 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.411900 | orchestrator | 2025-04-13 00:57:27.411904 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-13 00:57:27.411909 | orchestrator | Sunday 13 April 2025 00:54:09 +0000 (0:00:00.657) 0:09:43.777 ********** 2025-04-13 00:57:27.411914 | orchestrator | skipping: 
[testbed-node-0] => (item=)  2025-04-13 00:57:27.411919 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-13 00:57:27.411924 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.411929 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-13 00:57:27.411933 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-13 00:57:27.411938 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.411943 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-13 00:57:27.411948 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-13 00:57:27.411953 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.411958 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-13 00:57:27.411962 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-13 00:57:27.411967 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.411974 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-13 00:57:27.411979 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-13 00:57:27.411984 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.411989 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-13 00:57:27.411994 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-13 00:57:27.411999 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412004 | orchestrator | 2025-04-13 00:57:27.412009 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-13 00:57:27.412014 | orchestrator | Sunday 13 April 2025 00:54:10 +0000 (0:00:00.956) 0:09:44.734 ********** 2025-04-13 00:57:27.412018 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-13 00:57:27.412025 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-13 00:57:27.412030 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412035 | orchestrator | skipping: [testbed-node-1] => 
(item=osd memory target)  2025-04-13 00:57:27.412040 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-13 00:57:27.412045 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412050 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-13 00:57:27.412054 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-13 00:57:27.412059 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.412064 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-13 00:57:27.412069 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-13 00:57:27.412073 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412078 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-13 00:57:27.412083 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-13 00:57:27.412091 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412096 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-13 00:57:27.412101 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-13 00:57:27.412106 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412111 | orchestrator | 2025-04-13 00:57:27.412115 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-13 00:57:27.412120 | orchestrator | Sunday 13 April 2025 00:54:11 +0000 (0:00:00.713) 0:09:45.448 ********** 2025-04-13 00:57:27.412125 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412130 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412134 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.412163 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412180 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412186 | orchestrator | skipping: [testbed-node-5] 
2025-04-13 00:57:27.412191 | orchestrator | 2025-04-13 00:57:27.412196 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-13 00:57:27.412201 | orchestrator | Sunday 13 April 2025 00:54:12 +0000 (0:00:00.928) 0:09:46.376 ********** 2025-04-13 00:57:27.412205 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412210 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412215 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.412220 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412225 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412229 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412234 | orchestrator | 2025-04-13 00:57:27.412239 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-13 00:57:27.412244 | orchestrator | Sunday 13 April 2025 00:54:12 +0000 (0:00:00.640) 0:09:47.016 ********** 2025-04-13 00:57:27.412249 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412253 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412258 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.412263 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412268 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412273 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412277 | orchestrator | 2025-04-13 00:57:27.412282 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-13 00:57:27.412287 | orchestrator | Sunday 13 April 2025 00:54:13 +0000 (0:00:00.888) 0:09:47.905 ********** 2025-04-13 00:57:27.412292 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412297 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412301 | orchestrator | skipping: [testbed-node-2] 2025-04-13 
00:57:27.412306 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412311 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412316 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412320 | orchestrator | 2025-04-13 00:57:27.412325 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-13 00:57:27.412330 | orchestrator | Sunday 13 April 2025 00:54:14 +0000 (0:00:00.686) 0:09:48.592 ********** 2025-04-13 00:57:27.412335 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412340 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412344 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.412349 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412354 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412359 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412364 | orchestrator | 2025-04-13 00:57:27.412371 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-13 00:57:27.412376 | orchestrator | Sunday 13 April 2025 00:54:15 +0000 (0:00:00.918) 0:09:49.510 ********** 2025-04-13 00:57:27.412381 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412386 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412390 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.412398 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412403 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412408 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412413 | orchestrator | 2025-04-13 00:57:27.412418 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-13 00:57:27.412422 | orchestrator | Sunday 13 April 2025 00:54:15 +0000 (0:00:00.704) 0:09:50.215 ********** 2025-04-13 00:57:27.412427 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-3)  2025-04-13 00:57:27.412432 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-13 00:57:27.412437 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-13 00:57:27.412442 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412447 | orchestrator | 2025-04-13 00:57:27.412451 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-13 00:57:27.412456 | orchestrator | Sunday 13 April 2025 00:54:16 +0000 (0:00:00.425) 0:09:50.641 ********** 2025-04-13 00:57:27.412461 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-13 00:57:27.412466 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-13 00:57:27.412471 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-13 00:57:27.412476 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412481 | orchestrator | 2025-04-13 00:57:27.412485 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-13 00:57:27.412490 | orchestrator | Sunday 13 April 2025 00:54:16 +0000 (0:00:00.449) 0:09:51.090 ********** 2025-04-13 00:57:27.412495 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-13 00:57:27.412500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-13 00:57:27.412505 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-13 00:57:27.412509 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412514 | orchestrator | 2025-04-13 00:57:27.412519 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-13 00:57:27.412524 | orchestrator | Sunday 13 April 2025 00:54:17 +0000 (0:00:00.729) 0:09:51.820 ********** 2025-04-13 00:57:27.412528 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412533 | orchestrator | skipping: 
[testbed-node-1] 2025-04-13 00:57:27.412538 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.412543 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412550 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412555 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412560 | orchestrator | 2025-04-13 00:57:27.412565 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-13 00:57:27.412569 | orchestrator | Sunday 13 April 2025 00:54:18 +0000 (0:00:00.902) 0:09:52.723 ********** 2025-04-13 00:57:27.412574 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-13 00:57:27.412579 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412584 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-13 00:57:27.412589 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412593 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-13 00:57:27.412611 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-13 00:57:27.412617 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.412621 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412626 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-13 00:57:27.412631 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412636 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-13 00:57:27.412641 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412645 | orchestrator | 2025-04-13 00:57:27.412650 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-13 00:57:27.412655 | orchestrator | Sunday 13 April 2025 00:54:19 +0000 (0:00:00.855) 0:09:53.579 ********** 2025-04-13 00:57:27.412660 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412665 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412672 | orchestrator | skipping: 
[testbed-node-2] 2025-04-13 00:57:27.412677 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412682 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412686 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412691 | orchestrator | 2025-04-13 00:57:27.412696 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-13 00:57:27.412701 | orchestrator | Sunday 13 April 2025 00:54:20 +0000 (0:00:00.958) 0:09:54.538 ********** 2025-04-13 00:57:27.412705 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412710 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412715 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.412720 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412724 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412729 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412734 | orchestrator | 2025-04-13 00:57:27.412739 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-13 00:57:27.412743 | orchestrator | Sunday 13 April 2025 00:54:20 +0000 (0:00:00.616) 0:09:55.154 ********** 2025-04-13 00:57:27.412748 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-13 00:57:27.412753 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412758 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-13 00:57:27.412763 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412767 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-13 00:57:27.412772 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.412777 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-13 00:57:27.412782 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412787 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-13 00:57:27.412791 | orchestrator | skipping: 
[testbed-node-4] 2025-04-13 00:57:27.412796 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-13 00:57:27.412801 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412805 | orchestrator | 2025-04-13 00:57:27.412810 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-13 00:57:27.412815 | orchestrator | Sunday 13 April 2025 00:54:21 +0000 (0:00:01.104) 0:09:56.259 ********** 2025-04-13 00:57:27.412820 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412825 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412829 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.412834 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-13 00:57:27.412839 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412844 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-13 00:57:27.412849 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412854 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-13 00:57:27.412858 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412863 | orchestrator | 2025-04-13 00:57:27.412868 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-13 00:57:27.412873 | orchestrator | Sunday 13 April 2025 00:54:22 +0000 (0:00:00.967) 0:09:57.226 ********** 2025-04-13 00:57:27.412878 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-13 00:57:27.412882 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-13 00:57:27.412887 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-13 
00:57:27.412892 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-13 00:57:27.412897 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-13 00:57:27.412902 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-13 00:57:27.412906 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:27.412914 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-13 00:57:27.412919 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-13 00:57:27.412923 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-13 00:57:27.412928 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:27.412933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-13 00:57:27.412938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-13 00:57:27.412942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-13 00:57:27.412947 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:27.412952 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-13 00:57:27.412957 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-13 00:57:27.412961 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-13 00:57:27.412966 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.412971 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.412976 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-13 00:57:27.412983 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-13 00:57:27.412988 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-13 00:57:27.412992 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.412999 | orchestrator | 2025-04-13 00:57:27.413004 | orchestrator | TASK [ceph-config : generate ceph.conf 
configuration file] *********************
2025-04-13 00:57:27.413009 | orchestrator | Sunday 13 April 2025 00:54:24 +0000 (0:00:01.432) 0:09:58.658 **********
2025-04-13 00:57:27.413014 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.413018 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.413023 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.413028 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.413033 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.413038 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.413042 | orchestrator |
2025-04-13 00:57:27.413047 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-04-13 00:57:27.413052 | orchestrator | Sunday 13 April 2025 00:54:25 +0000 (0:00:01.408) 0:10:00.067 **********
2025-04-13 00:57:27.413057 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.413062 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.413066 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.413071 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-13 00:57:27.413076 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.413081 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-04-13 00:57:27.413085 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.413090 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-04-13 00:57:27.413095 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.413100 | orchestrator |
2025-04-13 00:57:27.413104 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-04-13 00:57:27.413109 | orchestrator | Sunday 13 April 2025 00:54:27 +0000 (0:00:01.466) 0:10:01.534 **********
2025-04-13 00:57:27.413114 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.413119 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.413160 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.413165 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.413170 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.413175 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.413180 | orchestrator |
2025-04-13 00:57:27.413184 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-04-13 00:57:27.413189 | orchestrator | Sunday 13 April 2025 00:54:28 +0000 (0:00:01.417) 0:10:02.952 **********
2025-04-13 00:57:27.413194 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:57:27.413199 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:57:27.413207 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:57:27.413212 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.413216 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.413221 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.413226 | orchestrator |
2025-04-13 00:57:27.413231 | orchestrator | TASK [ceph-crash : create client.crash keyring] ********************************
2025-04-13 00:57:27.413236 | orchestrator | Sunday 13 April 2025 00:54:29 +0000 (0:00:01.320) 0:10:04.272 **********
2025-04-13 00:57:27.413240 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.413245 | orchestrator |
2025-04-13 00:57:27.413253 | orchestrator | TASK [ceph-crash : get keys from monitors] *************************************
2025-04-13 00:57:27.413258 | orchestrator | Sunday 13 April 2025 00:54:33 +0000 (0:00:03.303) 0:10:07.576 **********
2025-04-13 00:57:27.413262 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.413267 | orchestrator |
2025-04-13 00:57:27.413272 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] *********************************
2025-04-13 00:57:27.413277 | orchestrator | Sunday 13 April 2025 00:54:34 +0000 (0:00:01.686) 0:10:09.262 **********
2025-04-13 00:57:27.413282 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.413286 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.413291 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.413296 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.413301 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.413306 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.413310 | orchestrator |
2025-04-13 00:57:27.413315 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] **************************
2025-04-13 00:57:27.413320 | orchestrator | Sunday 13 April 2025 00:54:36 +0000 (0:00:01.798) 0:10:11.061 **********
2025-04-13 00:57:27.413325 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.413330 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.413335 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.413340 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.413344 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.413349 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.413354 | orchestrator |
2025-04-13 00:57:27.413359 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] **********************************
2025-04-13 00:57:27.413364 | orchestrator | Sunday 13 April 2025 00:54:37 +0000 (0:00:01.077) 0:10:12.138 **********
2025-04-13 00:57:27.413369 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.413374 | orchestrator |
2025-04-13 00:57:27.413379 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ********
2025-04-13 00:57:27.413384 | orchestrator | Sunday 13 April 2025 00:54:39 +0000 (0:00:01.606) 0:10:13.744 **********
2025-04-13 00:57:27.413389 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.413394 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.413398 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.413403 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.413408 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.413413 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.413418 | orchestrator |
2025-04-13 00:57:27.413422 | orchestrator | TASK [ceph-crash : start the ceph-crash service] *******************************
2025-04-13 00:57:27.413427 | orchestrator | Sunday 13 April 2025 00:54:41 +0000 (0:00:02.096) 0:10:15.841 **********
2025-04-13 00:57:27.413432 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.413437 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.413442 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.413446 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.413451 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.413456 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.413461 | orchestrator |
2025-04-13 00:57:27.413466 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] ****************************
2025-04-13 00:57:27.413474 | orchestrator | Sunday 13 April 2025 00:54:45 +0000 (0:00:04.045) 0:10:19.886 **********
2025-04-13 00:57:27.413483 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.413488 | orchestrator |
2025-04-13 00:57:27.413493 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ******
2025-04-13 00:57:27.413498 | orchestrator | Sunday 13 April 2025 00:54:46 +0000 (0:00:01.120) 0:10:21.007 **********
2025-04-13 00:57:27.413502 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.413507 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.413512 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.413517 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.413522 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.413526 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.413531 | orchestrator |
2025-04-13 00:57:27.413536 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] ****************
2025-04-13 00:57:27.413541 | orchestrator | Sunday 13 April 2025 00:54:47 +0000 (0:00:00.517) 0:10:21.524 **********
2025-04-13 00:57:27.413545 | orchestrator | changed: [testbed-node-0]
2025-04-13 00:57:27.413550 | orchestrator | changed: [testbed-node-1]
2025-04-13 00:57:27.413555 | orchestrator | changed: [testbed-node-3]
2025-04-13 00:57:27.413560 | orchestrator | changed: [testbed-node-2]
2025-04-13 00:57:27.413564 | orchestrator | changed: [testbed-node-4]
2025-04-13 00:57:27.413569 | orchestrator | changed: [testbed-node-5]
2025-04-13 00:57:27.413574 | orchestrator |
2025-04-13 00:57:27.413579 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] *******
2025-04-13 00:57:27.413584 | orchestrator | Sunday 13 April 2025 00:54:49 +0000 (0:00:02.740) 0:10:24.264 **********
2025-04-13 00:57:27.413589 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:57:27.413593 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:57:27.413601 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:57:27.413606 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.413610 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.413615 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.413620 | orchestrator |
2025-04-13 00:57:27.413625 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-04-13 00:57:27.413630 | orchestrator |
2025-04-13 00:57:27.413634 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-04-13 00:57:27.413639 | orchestrator | Sunday 13 April 2025 00:54:53 +0000 (0:00:03.757) 0:10:28.022 **********
2025-04-13 00:57:27.413644 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:57:27.413651 | orchestrator |
2025-04-13 00:57:27.413656 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-04-13 00:57:27.413661 | orchestrator | Sunday 13 April 2025 00:54:54 +0000 (0:00:00.844) 0:10:28.867 **********
2025-04-13 00:57:27.413666 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.413671 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.413676 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.413681 | orchestrator |
2025-04-13 00:57:27.413685 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-04-13 00:57:27.413690 | orchestrator | Sunday 13 April 2025 00:54:54 +0000 (0:00:00.330) 0:10:29.198 **********
2025-04-13 00:57:27.413695 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.413700 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.413704 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.413709 | orchestrator |
2025-04-13 00:57:27.413714 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-04-13 00:57:27.413719 | orchestrator | Sunday 13 April 2025 00:54:55 +0000 (0:00:00.787) 0:10:29.985 **********
2025-04-13 00:57:27.413723 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.413728 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.413733 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.413738 | orchestrator |
2025-04-13 00:57:27.413743 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-04-13 00:57:27.413751 | orchestrator | Sunday 13 April 2025 00:54:56 +0000 (0:00:00.902) 0:10:30.887 **********
2025-04-13 00:57:27.413756 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.413760 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.413765 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.413770 | orchestrator |
2025-04-13 00:57:27.413777 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-04-13 00:57:27.413782 | orchestrator | Sunday 13 April 2025 00:54:57 +0000 (0:00:01.268) 0:10:32.156 **********
2025-04-13 00:57:27.413787 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.413792 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.413797 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.413801 | orchestrator |
2025-04-13 00:57:27.413806 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-04-13 00:57:27.413811 | orchestrator | Sunday 13 April 2025 00:54:58 +0000 (0:00:00.317) 0:10:32.474 **********
2025-04-13 00:57:27.413816 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.413820 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.413825 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.413830 | orchestrator |
2025-04-13 00:57:27.413835 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-04-13 00:57:27.413839 | orchestrator | Sunday 13 April 2025 00:54:58 +0000 (0:00:00.356) 0:10:32.830 **********
2025-04-13 00:57:27.413844 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.413849 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.413854 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.413859 | orchestrator |
2025-04-13 00:57:27.413863 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-04-13 00:57:27.413868 | orchestrator | Sunday 13 April 2025 00:54:58 +0000 (0:00:00.364) 0:10:33.195 **********
2025-04-13 00:57:27.413873 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.413878 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.413882 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.413887 | orchestrator |
2025-04-13 00:57:27.413892 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-04-13 00:57:27.413897 | orchestrator | Sunday 13 April 2025 00:54:59 +0000 (0:00:00.878) 0:10:34.074 **********
2025-04-13 00:57:27.413904 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.413911 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.413916 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.413921 | orchestrator |
2025-04-13 00:57:27.413926 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-04-13 00:57:27.413931 | orchestrator | Sunday 13 April 2025 00:55:00 +0000 (0:00:00.324) 0:10:34.398 **********
2025-04-13 00:57:27.413936 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.413940 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.413945 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.413950 | orchestrator |
2025-04-13 00:57:27.413955 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-04-13 00:57:27.413960 | orchestrator | Sunday 13 April 2025 00:55:00 +0000 (0:00:00.314) 0:10:34.713 **********
2025-04-13 00:57:27.413965 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.413969 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.413974 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.413979 | orchestrator |
2025-04-13 00:57:27.413984 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-04-13 00:57:27.413988 | orchestrator | Sunday 13 April 2025 00:55:01 +0000 (0:00:00.734) 0:10:35.448 **********
2025-04-13 00:57:27.413993 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.413998 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414003 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414007 | orchestrator |
2025-04-13 00:57:27.414035 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-04-13 00:57:27.414041 | orchestrator | Sunday 13 April 2025 00:55:01 +0000 (0:00:00.603) 0:10:36.051 **********
2025-04-13 00:57:27.414049 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414054 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414059 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414064 | orchestrator |
2025-04-13 00:57:27.414069 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-04-13 00:57:27.414073 | orchestrator | Sunday 13 April 2025 00:55:02 +0000 (0:00:00.351) 0:10:36.403 **********
2025-04-13 00:57:27.414078 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.414083 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.414088 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.414092 | orchestrator |
2025-04-13 00:57:27.414097 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-04-13 00:57:27.414102 | orchestrator | Sunday 13 April 2025 00:55:02 +0000 (0:00:00.341) 0:10:36.744 **********
2025-04-13 00:57:27.414107 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.414112 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.414116 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.414122 | orchestrator |
2025-04-13 00:57:27.414126 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-04-13 00:57:27.414131 | orchestrator | Sunday 13 April 2025 00:55:02 +0000 (0:00:00.348) 0:10:37.093 **********
2025-04-13 00:57:27.414136 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.414163 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.414171 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.414176 | orchestrator |
2025-04-13 00:57:27.414181 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-04-13 00:57:27.414186 | orchestrator | Sunday 13 April 2025 00:55:03 +0000 (0:00:00.609) 0:10:37.703 **********
2025-04-13 00:57:27.414191 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414195 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414200 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414205 | orchestrator |
2025-04-13 00:57:27.414210 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-04-13 00:57:27.414215 | orchestrator | Sunday 13 April 2025 00:55:03 +0000 (0:00:00.334) 0:10:38.037 **********
2025-04-13 00:57:27.414220 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414224 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414229 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414234 | orchestrator |
2025-04-13 00:57:27.414239 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-04-13 00:57:27.414243 | orchestrator | Sunday 13 April 2025 00:55:04 +0000 (0:00:00.310) 0:10:38.348 **********
2025-04-13 00:57:27.414248 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414253 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414258 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414263 | orchestrator |
2025-04-13 00:57:27.414267 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-04-13 00:57:27.414272 | orchestrator | Sunday 13 April 2025 00:55:04 +0000 (0:00:00.327) 0:10:38.676 **********
2025-04-13 00:57:27.414277 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:57:27.414282 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:57:27.414287 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:57:27.414292 | orchestrator |
2025-04-13 00:57:27.414299 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-04-13 00:57:27.414304 | orchestrator | Sunday 13 April 2025 00:55:05 +0000 (0:00:00.694) 0:10:39.370 **********
2025-04-13 00:57:27.414309 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414313 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414318 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414323 | orchestrator |
2025-04-13 00:57:27.414328 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-04-13 00:57:27.414333 | orchestrator | Sunday 13 April 2025 00:55:05 +0000 (0:00:00.325) 0:10:39.696 **********
2025-04-13 00:57:27.414337 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414345 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414349 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414354 | orchestrator |
2025-04-13 00:57:27.414359 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-04-13 00:57:27.414364 | orchestrator | Sunday 13 April 2025 00:55:05 +0000 (0:00:00.311) 0:10:40.008 **********
2025-04-13 00:57:27.414369 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414373 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414378 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414383 | orchestrator |
2025-04-13 00:57:27.414388 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-04-13 00:57:27.414392 | orchestrator | Sunday 13 April 2025 00:55:06 +0000 (0:00:00.336) 0:10:40.344 **********
2025-04-13 00:57:27.414397 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414402 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414410 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414415 | orchestrator |
2025-04-13 00:57:27.414420 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-04-13 00:57:27.414424 | orchestrator | Sunday 13 April 2025 00:55:06 +0000 (0:00:00.630) 0:10:40.975 **********
2025-04-13 00:57:27.414429 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414434 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414439 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414444 | orchestrator |
2025-04-13 00:57:27.414448 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-04-13 00:57:27.414453 | orchestrator | Sunday 13 April 2025 00:55:07 +0000 (0:00:00.334) 0:10:41.309 **********
2025-04-13 00:57:27.414458 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414463 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414468 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414472 | orchestrator |
2025-04-13 00:57:27.414477 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-04-13 00:57:27.414482 | orchestrator | Sunday 13 April 2025 00:55:07 +0000 (0:00:00.305) 0:10:41.614 **********
2025-04-13 00:57:27.414487 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414492 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414496 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414501 | orchestrator |
2025-04-13 00:57:27.414506 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-04-13 00:57:27.414511 | orchestrator | Sunday 13 April 2025 00:55:07 +0000 (0:00:00.332) 0:10:41.947 **********
2025-04-13 00:57:27.414516 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414520 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414525 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414530 | orchestrator |
2025-04-13 00:57:27.414535 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-04-13 00:57:27.414540 | orchestrator | Sunday 13 April 2025 00:55:08 +0000 (0:00:00.622) 0:10:42.570 **********
2025-04-13 00:57:27.414544 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414549 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414554 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414559 | orchestrator |
2025-04-13 00:57:27.414563 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-04-13 00:57:27.414568 | orchestrator | Sunday 13 April 2025 00:55:08 +0000 (0:00:00.347) 0:10:42.917 **********
2025-04-13 00:57:27.414573 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414578 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414583 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414587 | orchestrator |
2025-04-13 00:57:27.414592 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-04-13 00:57:27.414597 | orchestrator | Sunday 13 April 2025 00:55:08 +0000 (0:00:00.349) 0:10:43.267 **********
2025-04-13 00:57:27.414607 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414612 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414617 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414622 | orchestrator |
2025-04-13 00:57:27.414627 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-04-13 00:57:27.414631 | orchestrator | Sunday 13 April 2025 00:55:09 +0000 (0:00:00.325) 0:10:43.592 **********
2025-04-13 00:57:27.414636 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414641 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414646 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414651 | orchestrator |
2025-04-13 00:57:27.414655 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-04-13 00:57:27.414660 | orchestrator | Sunday 13 April 2025 00:55:09 +0000 (0:00:00.642) 0:10:44.235 **********
2025-04-13 00:57:27.414665 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-13 00:57:27.414670 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-13 00:57:27.414675 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414680 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-13 00:57:27.414684 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-13 00:57:27.414689 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414697 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-13 00:57:27.414705 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-13 00:57:27.414710 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414715 | orchestrator |
2025-04-13 00:57:27.414720 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-04-13 00:57:27.414725 | orchestrator | Sunday 13 April 2025 00:55:10 +0000 (0:00:00.419) 0:10:44.654 **********
2025-04-13 00:57:27.414730 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-04-13 00:57:27.414734 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-04-13 00:57:27.414739 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414744 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-04-13 00:57:27.414749 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-04-13 00:57:27.414754 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414759 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-04-13 00:57:27.414763 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-04-13 00:57:27.414768 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414773 | orchestrator |
2025-04-13 00:57:27.414778 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-04-13 00:57:27.414782 | orchestrator | Sunday 13 April 2025 00:55:10 +0000 (0:00:00.358) 0:10:45.012 **********
2025-04-13 00:57:27.414787 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414792 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414797 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414802 | orchestrator |
2025-04-13 00:57:27.414806 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-04-13 00:57:27.414811 | orchestrator | Sunday 13 April 2025 00:55:11 +0000 (0:00:00.355) 0:10:45.367 **********
2025-04-13 00:57:27.414816 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414823 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414828 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414833 | orchestrator |
2025-04-13 00:57:27.414838 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-13 00:57:27.414843 | orchestrator | Sunday 13 April 2025 00:55:11 +0000 (0:00:00.681) 0:10:46.049 **********
2025-04-13 00:57:27.414847 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414852 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414857 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414862 | orchestrator |
2025-04-13 00:57:27.414870 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-13 00:57:27.414875 | orchestrator | Sunday 13 April 2025 00:55:12 +0000 (0:00:00.358) 0:10:46.408 **********
2025-04-13 00:57:27.414880 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414885 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414890 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414894 | orchestrator |
2025-04-13 00:57:27.414899 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-13 00:57:27.414907 | orchestrator | Sunday 13 April 2025 00:55:12 +0000 (0:00:00.329) 0:10:46.737 **********
2025-04-13 00:57:27.414912 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414916 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414921 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414926 | orchestrator |
2025-04-13 00:57:27.414931 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-13 00:57:27.414936 | orchestrator | Sunday 13 April 2025 00:55:12 +0000 (0:00:00.343) 0:10:47.081 **********
2025-04-13 00:57:27.414940 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414945 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.414950 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.414955 | orchestrator |
2025-04-13 00:57:27.414959 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-13 00:57:27.414964 | orchestrator | Sunday 13 April 2025 00:55:13 +0000 (0:00:00.622) 0:10:47.703 **********
2025-04-13 00:57:27.414969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.414974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.414978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.414983 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.414988 | orchestrator |
2025-04-13 00:57:27.414993 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-13 00:57:27.414998 | orchestrator | Sunday 13 April 2025 00:55:13 +0000 (0:00:00.416) 0:10:48.119 **********
2025-04-13 00:57:27.415002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.415007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.415012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.415017 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.415021 | orchestrator |
2025-04-13 00:57:27.415026 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-13 00:57:27.415031 | orchestrator | Sunday 13 April 2025 00:55:14 +0000 (0:00:00.444) 0:10:48.564 **********
2025-04-13 00:57:27.415036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.415040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.415045 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.415050 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.415055 | orchestrator |
2025-04-13 00:57:27.415060 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-13 00:57:27.415064 | orchestrator | Sunday 13 April 2025 00:55:14 +0000 (0:00:00.467) 0:10:49.032 **********
2025-04-13 00:57:27.415069 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.415074 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.415079 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.415083 | orchestrator |
2025-04-13 00:57:27.415088 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-13 00:57:27.415093 | orchestrator | Sunday 13 April 2025 00:55:15 +0000 (0:00:00.325) 0:10:49.357 **********
2025-04-13 00:57:27.415098 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-13 00:57:27.415102 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.415107 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-13 00:57:27.415112 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.415121 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-13 00:57:27.415126 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.415130 | orchestrator |
2025-04-13 00:57:27.415135 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-04-13 00:57:27.415150 | orchestrator | Sunday 13 April 2025 00:55:16 +0000 (0:00:01.073) 0:10:50.431 **********
2025-04-13 00:57:27.415155 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.415160 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.415165 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.415170 | orchestrator |
2025-04-13 00:57:27.415175 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-13 00:57:27.415179 | orchestrator | Sunday 13 April 2025 00:55:16 +0000 (0:00:00.330) 0:10:50.762 **********
2025-04-13 00:57:27.415184 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.415189 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.415194 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.415198 | orchestrator |
2025-04-13 00:57:27.415203 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-04-13 00:57:27.415208 | orchestrator | Sunday 13 April 2025 00:55:16 +0000 (0:00:00.330) 0:10:51.093 **********
2025-04-13 00:57:27.415213 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-13 00:57:27.415217 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.415222 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-13 00:57:27.415227 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.415232 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-13 00:57:27.415237 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.415241 | orchestrator |
2025-04-13 00:57:27.415249 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-04-13 00:57:27.415253 | orchestrator | Sunday 13 April 2025 00:55:17 +0000 (0:00:00.432) 0:10:51.526 **********
2025-04-13 00:57:27.415258 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-04-13 00:57:27.415263 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.415268 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-04-13 00:57:27.415273 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.415278 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-04-13 00:57:27.415282 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.415287 | orchestrator |
2025-04-13 00:57:27.415292 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-04-13 00:57:27.415297 | orchestrator | Sunday 13 April 2025 00:55:17 +0000 (0:00:00.645) 0:10:52.171 **********
2025-04-13 00:57:27.415301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:57:27.415306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:57:27.415311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:57:27.415316 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-04-13 00:57:27.415321 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-04-13 00:57:27.415325 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-04-13 00:57:27.415330 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.415335 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.415340 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-04-13 00:57:27.415345 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-04-13 00:57:27.415349 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-04-13 00:57:27.415354 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.415359 | orchestrator |
2025-04-13 00:57:27.415364 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-04-13 00:57:27.415372 | orchestrator | Sunday 13 April 2025 00:55:18 +0000 (0:00:00.626) 0:10:52.798 **********
2025-04-13 00:57:27.415377 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.415382 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.415387 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.415391 | orchestrator |
2025-04-13 00:57:27.415396 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-04-13 00:57:27.415401 | orchestrator | Sunday 13 April 2025 00:55:19 +0000 (0:00:00.854) 0:10:53.653 **********
2025-04-13 00:57:27.415406 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-13 00:57:27.415411 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:57:27.415415 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-04-13 00:57:27.415420 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:57:27.415425 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-04-13 00:57:27.415430 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:57:27.415435 | orchestrator
| 2025-04-13 00:57:27.415440 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-13 00:57:27.415444 | orchestrator | Sunday 13 April 2025 00:55:19 +0000 (0:00:00.627) 0:10:54.280 ********** 2025-04-13 00:57:27.415449 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.415454 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.415459 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.415464 | orchestrator | 2025-04-13 00:57:27.415469 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-13 00:57:27.415473 | orchestrator | Sunday 13 April 2025 00:55:20 +0000 (0:00:00.892) 0:10:55.173 ********** 2025-04-13 00:57:27.415478 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.415483 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.415488 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.415493 | orchestrator | 2025-04-13 00:57:27.415497 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-04-13 00:57:27.415505 | orchestrator | Sunday 13 April 2025 00:55:21 +0000 (0:00:00.661) 0:10:55.835 ********** 2025-04-13 00:57:27.415510 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.415514 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.415519 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-04-13 00:57:27.415524 | orchestrator | 2025-04-13 00:57:27.415529 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-04-13 00:57:27.415534 | orchestrator | Sunday 13 April 2025 00:55:21 +0000 (0:00:00.445) 0:10:56.280 ********** 2025-04-13 00:57:27.415539 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-13 00:57:27.415543 | orchestrator | 2025-04-13 00:57:27.415548 | orchestrator | TASK 
[ceph-facts : get current default crush rule name] ************************ 2025-04-13 00:57:27.415553 | orchestrator | Sunday 13 April 2025 00:55:24 +0000 (0:00:02.105) 0:10:58.386 ********** 2025-04-13 00:57:27.415559 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-04-13 00:57:27.415565 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.415570 | orchestrator | 2025-04-13 00:57:27.415574 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-04-13 00:57:27.415579 | orchestrator | Sunday 13 April 2025 00:55:24 +0000 (0:00:00.418) 0:10:58.805 ********** 2025-04-13 00:57:27.415586 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-13 00:57:27.415593 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-13 00:57:27.415601 | orchestrator | 2025-04-13 00:57:27.415606 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-04-13 00:57:27.415610 | orchestrator | Sunday 13 April 2025 00:55:31 +0000 (0:00:06.720) 0:11:05.526 ********** 2025-04-13 00:57:27.415615 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-13 00:57:27.415620 | orchestrator | 2025-04-13 00:57:27.415625 | orchestrator | TASK [ceph-mds : include common.yml] 
******************************************* 2025-04-13 00:57:27.415629 | orchestrator | Sunday 13 April 2025 00:55:34 +0000 (0:00:02.892) 0:11:08.418 ********** 2025-04-13 00:57:27.415634 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:57:27.415639 | orchestrator | 2025-04-13 00:57:27.415644 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-04-13 00:57:27.415649 | orchestrator | Sunday 13 April 2025 00:55:34 +0000 (0:00:00.783) 0:11:09.201 ********** 2025-04-13 00:57:27.415653 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-04-13 00:57:27.415658 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-04-13 00:57:27.415663 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-04-13 00:57:27.415668 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-04-13 00:57:27.415673 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-04-13 00:57:27.415677 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-04-13 00:57:27.415682 | orchestrator | 2025-04-13 00:57:27.415687 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-04-13 00:57:27.415692 | orchestrator | Sunday 13 April 2025 00:55:35 +0000 (0:00:01.018) 0:11:10.220 ********** 2025-04-13 00:57:27.415697 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-13 00:57:27.415701 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-13 00:57:27.415706 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-13 00:57:27.415711 | orchestrator | 2025-04-13 00:57:27.415716 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] 
*********************************** 2025-04-13 00:57:27.415721 | orchestrator | Sunday 13 April 2025 00:55:37 +0000 (0:00:01.987) 0:11:12.207 ********** 2025-04-13 00:57:27.415726 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-13 00:57:27.415730 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-13 00:57:27.415735 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:57:27.415740 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-13 00:57:27.415745 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-13 00:57:27.415749 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:57:27.415754 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-13 00:57:27.415759 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-13 00:57:27.415764 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:57:27.415768 | orchestrator | 2025-04-13 00:57:27.415773 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-04-13 00:57:27.415778 | orchestrator | Sunday 13 April 2025 00:55:39 +0000 (0:00:01.182) 0:11:13.390 ********** 2025-04-13 00:57:27.415783 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.415787 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.415792 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.415797 | orchestrator | 2025-04-13 00:57:27.415802 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-04-13 00:57:27.415806 | orchestrator | Sunday 13 April 2025 00:55:39 +0000 (0:00:00.742) 0:11:14.133 ********** 2025-04-13 00:57:27.415811 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:57:27.415819 | orchestrator | 2025-04-13 00:57:27.415824 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-04-13 
00:57:27.415829 | orchestrator | Sunday 13 April 2025 00:55:40 +0000 (0:00:00.637) 0:11:14.770 ********** 2025-04-13 00:57:27.415834 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:57:27.415838 | orchestrator | 2025-04-13 00:57:27.415843 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-04-13 00:57:27.415848 | orchestrator | Sunday 13 April 2025 00:55:41 +0000 (0:00:00.834) 0:11:15.605 ********** 2025-04-13 00:57:27.415853 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:57:27.415858 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:57:27.415862 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:57:27.415867 | orchestrator | 2025-04-13 00:57:27.415872 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-04-13 00:57:27.415877 | orchestrator | Sunday 13 April 2025 00:55:42 +0000 (0:00:01.315) 0:11:16.921 ********** 2025-04-13 00:57:27.415881 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:57:27.415886 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:57:27.415891 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:57:27.415896 | orchestrator | 2025-04-13 00:57:27.415903 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-04-13 00:57:27.415908 | orchestrator | Sunday 13 April 2025 00:55:43 +0000 (0:00:01.158) 0:11:18.079 ********** 2025-04-13 00:57:27.415915 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:57:27.415920 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:57:27.415924 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:57:27.415929 | orchestrator | 2025-04-13 00:57:27.415934 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-04-13 00:57:27.415939 | orchestrator | Sunday 13 April 2025 00:55:45 +0000 
(0:00:01.780) 0:11:19.860 ********** 2025-04-13 00:57:27.415944 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:57:27.415948 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:57:27.415953 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:57:27.415958 | orchestrator | 2025-04-13 00:57:27.415963 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-04-13 00:57:27.415967 | orchestrator | Sunday 13 April 2025 00:55:47 +0000 (0:00:01.801) 0:11:21.662 ********** 2025-04-13 00:57:27.415972 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-04-13 00:57:27.415977 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-04-13 00:57:27.415982 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-04-13 00:57:27.415986 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.415991 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.415996 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.416001 | orchestrator | 2025-04-13 00:57:27.416006 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-13 00:57:27.416010 | orchestrator | Sunday 13 April 2025 00:56:04 +0000 (0:00:16.975) 0:11:38.637 ********** 2025-04-13 00:57:27.416015 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:57:27.416020 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:57:27.416025 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:57:27.416029 | orchestrator | 2025-04-13 00:57:27.416034 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-04-13 00:57:27.416039 | orchestrator | Sunday 13 April 2025 00:56:05 +0000 (0:00:00.672) 0:11:39.310 ********** 2025-04-13 00:57:27.416044 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml 
for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:57:27.416048 | orchestrator | 2025-04-13 00:57:27.416053 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-04-13 00:57:27.416058 | orchestrator | Sunday 13 April 2025 00:56:05 +0000 (0:00:00.782) 0:11:40.092 ********** 2025-04-13 00:57:27.416066 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.416071 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.416075 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.416080 | orchestrator | 2025-04-13 00:57:27.416085 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-04-13 00:57:27.416090 | orchestrator | Sunday 13 April 2025 00:56:06 +0000 (0:00:00.334) 0:11:40.427 ********** 2025-04-13 00:57:27.416094 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:57:27.416099 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:57:27.416104 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:57:27.416109 | orchestrator | 2025-04-13 00:57:27.416114 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-04-13 00:57:27.416118 | orchestrator | Sunday 13 April 2025 00:56:07 +0000 (0:00:01.183) 0:11:41.610 ********** 2025-04-13 00:57:27.416123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-13 00:57:27.416128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-13 00:57:27.416133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-13 00:57:27.416148 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416153 | orchestrator | 2025-04-13 00:57:27.416158 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-04-13 00:57:27.416163 | orchestrator | Sunday 13 April 2025 00:56:08 +0000 (0:00:01.123) 0:11:42.734 ********** 2025-04-13 00:57:27.416168 | 
orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.416173 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.416177 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.416182 | orchestrator | 2025-04-13 00:57:27.416187 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-13 00:57:27.416192 | orchestrator | Sunday 13 April 2025 00:56:08 +0000 (0:00:00.360) 0:11:43.094 ********** 2025-04-13 00:57:27.416197 | orchestrator | changed: [testbed-node-3] 2025-04-13 00:57:27.416201 | orchestrator | changed: [testbed-node-4] 2025-04-13 00:57:27.416206 | orchestrator | changed: [testbed-node-5] 2025-04-13 00:57:27.416211 | orchestrator | 2025-04-13 00:57:27.416216 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-04-13 00:57:27.416221 | orchestrator | 2025-04-13 00:57:27.416225 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-13 00:57:27.416230 | orchestrator | Sunday 13 April 2025 00:56:10 +0000 (0:00:01.997) 0:11:45.091 ********** 2025-04-13 00:57:27.416235 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:57:27.416242 | orchestrator | 2025-04-13 00:57:27.416247 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-13 00:57:27.416252 | orchestrator | Sunday 13 April 2025 00:56:11 +0000 (0:00:00.728) 0:11:45.819 ********** 2025-04-13 00:57:27.416257 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416262 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416267 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416271 | orchestrator | 2025-04-13 00:57:27.416276 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-13 00:57:27.416281 | orchestrator | 
Sunday 13 April 2025 00:56:11 +0000 (0:00:00.326) 0:11:46.146 ********** 2025-04-13 00:57:27.416286 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.416293 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.416298 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.416303 | orchestrator | 2025-04-13 00:57:27.416308 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-13 00:57:27.416313 | orchestrator | Sunday 13 April 2025 00:56:12 +0000 (0:00:00.680) 0:11:46.827 ********** 2025-04-13 00:57:27.416318 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.416322 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.416332 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.416338 | orchestrator | 2025-04-13 00:57:27.416343 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-13 00:57:27.416351 | orchestrator | Sunday 13 April 2025 00:56:13 +0000 (0:00:01.043) 0:11:47.870 ********** 2025-04-13 00:57:27.416356 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.416361 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.416365 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.416370 | orchestrator | 2025-04-13 00:57:27.416375 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-13 00:57:27.416380 | orchestrator | Sunday 13 April 2025 00:56:14 +0000 (0:00:00.732) 0:11:48.602 ********** 2025-04-13 00:57:27.416385 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416390 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416394 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416399 | orchestrator | 2025-04-13 00:57:27.416404 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-13 00:57:27.416409 | orchestrator | Sunday 13 April 2025 00:56:14 +0000 (0:00:00.314) 
0:11:48.916 ********** 2025-04-13 00:57:27.416414 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416419 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416423 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416428 | orchestrator | 2025-04-13 00:57:27.416433 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-13 00:57:27.416438 | orchestrator | Sunday 13 April 2025 00:56:14 +0000 (0:00:00.302) 0:11:49.219 ********** 2025-04-13 00:57:27.416443 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416448 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416452 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416457 | orchestrator | 2025-04-13 00:57:27.416462 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-13 00:57:27.416467 | orchestrator | Sunday 13 April 2025 00:56:15 +0000 (0:00:00.634) 0:11:49.854 ********** 2025-04-13 00:57:27.416472 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416477 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416482 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416486 | orchestrator | 2025-04-13 00:57:27.416491 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-13 00:57:27.416496 | orchestrator | Sunday 13 April 2025 00:56:15 +0000 (0:00:00.320) 0:11:50.174 ********** 2025-04-13 00:57:27.416501 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416506 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416510 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416515 | orchestrator | 2025-04-13 00:57:27.416520 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-13 00:57:27.416525 | orchestrator | Sunday 13 April 2025 00:56:16 +0000 (0:00:00.322) 
0:11:50.497 ********** 2025-04-13 00:57:27.416530 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416535 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416539 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416544 | orchestrator | 2025-04-13 00:57:27.416549 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-13 00:57:27.416554 | orchestrator | Sunday 13 April 2025 00:56:16 +0000 (0:00:00.297) 0:11:50.795 ********** 2025-04-13 00:57:27.416559 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.416564 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.416568 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.416573 | orchestrator | 2025-04-13 00:57:27.416578 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-13 00:57:27.416583 | orchestrator | Sunday 13 April 2025 00:56:17 +0000 (0:00:01.054) 0:11:51.849 ********** 2025-04-13 00:57:27.416588 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416593 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416598 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416603 | orchestrator | 2025-04-13 00:57:27.416607 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-13 00:57:27.416612 | orchestrator | Sunday 13 April 2025 00:56:17 +0000 (0:00:00.310) 0:11:52.159 ********** 2025-04-13 00:57:27.416620 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416625 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416630 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416635 | orchestrator | 2025-04-13 00:57:27.416639 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-13 00:57:27.416644 | orchestrator | Sunday 13 April 2025 00:56:18 +0000 (0:00:00.328) 0:11:52.488 
********** 2025-04-13 00:57:27.416649 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.416654 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.416659 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.416663 | orchestrator | 2025-04-13 00:57:27.416668 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-13 00:57:27.416673 | orchestrator | Sunday 13 April 2025 00:56:18 +0000 (0:00:00.323) 0:11:52.812 ********** 2025-04-13 00:57:27.416678 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.416683 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.416688 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.416692 | orchestrator | 2025-04-13 00:57:27.416697 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-13 00:57:27.416702 | orchestrator | Sunday 13 April 2025 00:56:19 +0000 (0:00:00.622) 0:11:53.434 ********** 2025-04-13 00:57:27.416707 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.416712 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.416716 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.416721 | orchestrator | 2025-04-13 00:57:27.416726 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-13 00:57:27.416731 | orchestrator | Sunday 13 April 2025 00:56:19 +0000 (0:00:00.377) 0:11:53.812 ********** 2025-04-13 00:57:27.416736 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416741 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416745 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416750 | orchestrator | 2025-04-13 00:57:27.416755 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-13 00:57:27.416760 | orchestrator | Sunday 13 April 2025 00:56:19 +0000 (0:00:00.337) 0:11:54.149 ********** 2025-04-13 00:57:27.416765 | orchestrator | 
skipping: [testbed-node-3] 2025-04-13 00:57:27.416769 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416774 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416779 | orchestrator | 2025-04-13 00:57:27.416786 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-13 00:57:27.416791 | orchestrator | Sunday 13 April 2025 00:56:20 +0000 (0:00:00.313) 0:11:54.463 ********** 2025-04-13 00:57:27.416796 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416804 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416809 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416814 | orchestrator | 2025-04-13 00:57:27.416821 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-13 00:57:27.416826 | orchestrator | Sunday 13 April 2025 00:56:20 +0000 (0:00:00.601) 0:11:55.064 ********** 2025-04-13 00:57:27.416831 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:57:27.416836 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:57:27.416841 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:57:27.416846 | orchestrator | 2025-04-13 00:57:27.416851 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-13 00:57:27.416855 | orchestrator | Sunday 13 April 2025 00:56:21 +0000 (0:00:00.345) 0:11:55.410 ********** 2025-04-13 00:57:27.416860 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416865 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416870 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416875 | orchestrator | 2025-04-13 00:57:27.416880 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-13 00:57:27.416884 | orchestrator | Sunday 13 April 2025 00:56:21 +0000 (0:00:00.392) 0:11:55.802 ********** 2025-04-13 00:57:27.416889 | orchestrator | skipping: 
[testbed-node-3] 2025-04-13 00:57:27.416900 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416905 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416910 | orchestrator | 2025-04-13 00:57:27.416915 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-13 00:57:27.416920 | orchestrator | Sunday 13 April 2025 00:56:21 +0000 (0:00:00.311) 0:11:56.114 ********** 2025-04-13 00:57:27.416925 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416930 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416934 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416939 | orchestrator | 2025-04-13 00:57:27.416944 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-13 00:57:27.416949 | orchestrator | Sunday 13 April 2025 00:56:22 +0000 (0:00:00.630) 0:11:56.744 ********** 2025-04-13 00:57:27.416954 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416959 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416964 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416968 | orchestrator | 2025-04-13 00:57:27.416973 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-13 00:57:27.416978 | orchestrator | Sunday 13 April 2025 00:56:22 +0000 (0:00:00.346) 0:11:57.091 ********** 2025-04-13 00:57:27.416983 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:57:27.416988 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:57:27.416993 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:57:27.416998 | orchestrator | 2025-04-13 00:57:27.417002 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-13 00:57:27.417007 | orchestrator | Sunday 13 April 2025 00:56:23 +0000 (0:00:00.349) 0:11:57.441 ********** 2025-04-13 00:57:27.417012 | orchestrator | skipping: 
[testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact _devices] *****************************************
Sunday 13 April 2025 00:56:23 +0000 (0:00:00.310) 0:11:57.751 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Sunday 13 April 2025 00:56:24 +0000 (0:00:00.590) 0:11:58.342 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Sunday 13 April 2025 00:56:24 +0000 (0:00:00.357) 0:11:58.699 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Sunday 13 April 2025 00:56:24 +0000 (0:00:00.331) 0:11:59.031 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
Sunday 13 April 2025 00:56:25 +0000 (0:00:00.331) 0:11:59.363 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
Sunday 13 April 2025 00:56:25 +0000 (0:00:00.621) 0:11:59.985 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
Sunday 13 April 2025 00:56:26 +0000 (0:00:00.329) 0:12:00.315 **********
skipping: [testbed-node-3] => (item=)
skipping: [testbed-node-3] => (item=)
skipping: [testbed-node-4] => (item=)
skipping: [testbed-node-4] => (item=)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=)
skipping: [testbed-node-5] => (item=)
skipping: [testbed-node-5]

TASK [ceph-config : drop osd_memory_target from conf override] *****************
Sunday 13 April 2025 00:56:26 +0000 (0:00:00.427) 0:12:00.743 **********
skipping: [testbed-node-3] => (item=osd memory target)
skipping: [testbed-node-3] => (item=osd_memory_target)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=osd memory target)
skipping: [testbed-node-4] => (item=osd_memory_target)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=osd memory target)
skipping: [testbed-node-5] => (item=osd_memory_target)
skipping: [testbed-node-5]

TASK [ceph-config : set_fact _osd_memory_target] *******************************
Sunday 13 April 2025 00:56:26 +0000 (0:00:00.382) 0:12:01.125 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : create ceph conf directory] ********************************
Sunday 13 April 2025 00:56:27 +0000 (0:00:00.620) 0:12:01.745 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Sunday 13 April 2025 00:56:27 +0000 (0:00:00.342) 0:12:02.087 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
Sunday 13 April 2025 00:56:28 +0000 (0:00:00.329) 0:12:02.417 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
Sunday 13 April 2025 00:56:28 +0000 (0:00:00.331) 0:12:02.748 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
Sunday 13 April 2025 00:56:29 +0000 (0:00:00.641) 0:12:03.389 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _interface] ****************************************
Sunday 13 April 2025 00:56:29 +0000 (0:00:00.342) 0:12:03.731 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
Sunday 13 April 2025 00:56:29 +0000 (0:00:00.438) 0:12:04.171 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
Sunday 13 April 2025 00:56:30 +0000 (0:00:00.448) 0:12:04.619 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Sunday 13 April 2025 00:56:30 +0000 (0:00:00.436) 0:12:05.056 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
Sunday 13 April 2025 00:56:31 +0000 (0:00:00.329) 0:12:05.386 **********
skipping: [testbed-node-3] => (item=0)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=0)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=0)
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
Sunday 13 April 2025 00:56:31 +0000 (0:00:00.828) 0:12:06.214 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Sunday 13 April 2025 00:56:32 +0000 (0:00:00.335) 0:12:06.550 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
Sunday 13 April 2025 00:56:32 +0000 (0:00:00.355) 0:12:06.905 **********
skipping: [testbed-node-3] => (item=0)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=0)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=0)
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances_host] ********************************
Sunday 13 April 2025 00:56:33 +0000 (0:00:00.754) 0:12:07.660 **********
skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances_all] *********************************
Sunday 13 April 2025 00:56:33 +0000 (0:00:00.336) 0:12:07.997 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=testbed-node-3)
skipping: [testbed-node-5] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-node-5)
skipping: [testbed-node-5]

TASK [ceph-config : generate ceph.conf configuration file] *********************
Sunday 13 April 2025 00:56:34 +0000 (0:00:00.634) 0:12:08.631 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : create rgw keyrings] ******************************************
Sunday 13 April 2025 00:56:35 +0000 (0:00:00.838) 0:12:09.470 **********
skipping: [testbed-node-3] => (item=None)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=None)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=None)
skipping: [testbed-node-5]

TASK [ceph-rgw : include_tasks multisite] **************************************
Sunday 13 April 2025 00:56:35 +0000 (0:00:00.641) 0:12:10.111 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
Sunday 13 April 2025 00:56:36 +0000 (0:00:00.836) 0:12:10.948 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : include common.yml] *******************************************
Sunday 13 April 2025 00:56:37 +0000 (0:00:00.496) 0:12:11.444 **********
included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : create rados gateway directories] *****************************
Sunday 13 April 2025 00:56:37 +0000 (0:00:00.627) 0:12:12.071 **********
ok: [testbed-node-3] => (item=/var/run/ceph)
ok: [testbed-node-4] => (item=/var/run/ceph)
ok: [testbed-node-5] => (item=/var/run/ceph)

TASK [ceph-rgw : get keys from monitors] ***************************************
Sunday 13 April 2025 00:56:38 +0000 (0:00:00.618) 0:12:12.690 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : copy ceph key(s) if needed] ***********************************
Sunday 13 April 2025 00:56:40 +0000 (0:00:01.805) 0:12:14.495 **********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : copy SSL certificate & key data to certificate path] **********
Sunday 13 April 2025 00:56:41 +0000 (0:00:01.421) 0:12:15.917 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : include_tasks pre_requisite.yml] ******************************
Sunday 13 April 2025 00:56:41 +0000 (0:00:00.331) 0:12:16.248 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : rgw pool creation tasks] **************************************
Sunday 13 April 2025 00:56:42 +0000 (0:00:00.323) 0:12:16.572 **********
included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3

TASK [ceph-rgw : create ec profile] ********************************************
Sunday 13 April 2025 00:56:42 +0000 (0:00:00.230) 0:12:16.802 **********
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : set crush rule] ***********************************************
Sunday 13 April 2025 00:56:43 +0000 (0:00:00.895) 0:12:17.698 **********
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : create ec pools for rgw] **************************************
Sunday 13 April 2025 00:56:44 +0000 (0:00:00.964) 0:12:18.662 **********
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : create replicated pools for rgw] ******************************
Sunday 13 April 2025 00:56:45 +0000 (0:00:00.673) 0:12:19.335 **********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})

TASK [ceph-rgw : include_tasks openstack-keystone.yml] *************************
Sunday 13 April 2025 00:57:10 +0000 (0:00:25.251) 0:12:44.587 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : include_tasks start_radosgw.yml] ******************************
Sunday 13 April 2025 00:57:10 +0000 (0:00:00.495) 0:12:45.082 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : include start_docker_rgw.yml] *********************************
Sunday 13 April 2025 00:57:11 +0000 (0:00:00.359) 0:12:45.441 **********
included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : include_task systemd.yml] *************************************
Sunday 13 April 2025 00:57:11 +0000 (0:00:00.545) 0:12:45.987 **********
included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : generate systemd unit file] ***********************************
Sunday 13 April 2025 00:57:12 +0000 (0:00:00.876) 0:12:46.863 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : generate systemd ceph-radosgw target file] ********************
Sunday 13 April 2025 00:57:13 +0000 (0:00:01.213) 0:12:48.077 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : enable ceph-radosgw.target] ***********************************
Sunday 13 April 2025 00:57:14 +0000 (0:00:01.120) 0:12:49.197 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : systemd start rgw container] **********************************
Sunday 13 April 2025 00:57:16 +0000 (0:00:01.999) 0:12:51.197 **********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-rgw : include_tasks multisite/main.yml] *****************************
Sunday 13 April 2025 00:57:18 +0000 (0:00:01.870) 0:12:53.068 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
Sunday 13 April 2025 00:57:19 +0000 (0:00:01.180) 0:12:54.248 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : rgws handler] **********************************
Sunday 13 April 2025 00:57:20 +0000 (0:00:00.673) 0:12:54.922 **********
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ********
Sunday 13 April 2025 00:57:21 +0000 (0:00:00.776) 0:12:55.699 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : copy rgw restart script] ***********************
Sunday 13 April 2025 00:57:21 +0000 (0:00:00.336) 0:12:56.035 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ********************
Sunday 13 April 2025 00:57:22 +0000 (0:00:01.217) 0:12:57.253 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
Sunday 13 April 2025 00:57:24 +0000 (0:00:01.071) 0:12:58.324 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
Sunday 13 April 2025 00:57:24 +0000 (0:00:00.336) 0:12:58.661 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0
testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0
testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0
testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0
testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0
testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0

TASKS RECAP ********************************************************************
Sunday 13 April 2025 00:57:25 +0000 (0:00:01.301) 0:12:59.962 **********
===============================================================================
ceph-osd : use ceph-volume to create bluestore osds -------------------- 41.76s
ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 35.19s
ceph-rgw : create replicated pools for rgw ----------------------------- 25.25s
ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.49s
ceph-mds : wait for mds socket to exist -------------------------------- 16.98s
ceph-mgr : wait for all mgr to be up ----------------------------------- 13.51s
ceph-osd : wait for all osd to be up ----------------------------------- 12.68s
ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 7.92s
ceph-mon : fetch ceph initial keys -------------------------------------- 7.50s
ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.80s
ceph-mds : create filesystem pools -------------------------------------- 6.72s
ceph-config : create ceph initial directories --------------------------- 5.50s
ceph-config : generate ceph.conf configuration file --------------------- 5.42s
ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 5.10s
ceph-mgr : add modules to ceph-mgr -------------------------------------- 4.89s
ceph-crash : start the ceph-crash service ------------------------------- 4.05s
ceph-handler : set _crash_handler_called after restart ------------------ 3.76s
ceph-handler : remove tempdir for scripts ------------------------------- 3.63s
ceph-crash : create client.crash keyring -------------------------------- 3.30s
ceph-osd : systemd start osd -------------------------------------------- 3.23s
2025-04-13 00:57:27 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED
2025-04-13 00:57:27 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:57:30 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:57:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:57:30 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED
2025-04-13 00:57:30 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:57:33 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:57:33 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:57:33 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED
2025-04-13 00:57:33 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:57:36 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:57:36 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:57:36 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED
2025-04-13 00:57:36 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:57:39 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state STARTED
2025-04-13 00:57:39 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:39.599202 | orchestrator | 2025-04-13 00:57:39 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:57:42.666578 | orchestrator | 2025-04-13 00:57:39 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:42.666719 | orchestrator | 2025-04-13 00:57:42.666740 | orchestrator | 2025-04-13 00:57:42.666755 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-04-13 00:57:42.666770 | orchestrator | 2025-04-13 00:57:42.666784 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-04-13 00:57:42.666797 | orchestrator | Sunday 13 April 2025 00:54:19 +0000 (0:00:00.169) 0:00:00.169 ********** 2025-04-13 00:57:42.666836 | orchestrator | ok: [localhost] => { 2025-04-13 00:57:42.666852 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-04-13 00:57:42.666866 | orchestrator | } 2025-04-13 00:57:42.666880 | orchestrator | 2025-04-13 00:57:42.666894 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-04-13 00:57:42.666908 | orchestrator | Sunday 13 April 2025 00:54:19 +0000 (0:00:00.055) 0:00:00.225 ********** 2025-04-13 00:57:42.666921 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-04-13 00:57:42.666937 | orchestrator | ...ignoring 2025-04-13 00:57:42.666951 | orchestrator | 2025-04-13 00:57:42.666965 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-04-13 00:57:42.666979 | orchestrator | Sunday 13 April 2025 00:54:22 +0000 (0:00:02.551) 0:00:02.777 ********** 2025-04-13 00:57:42.666993 | orchestrator | skipping: [localhost] 2025-04-13 00:57:42.667006 | orchestrator | 2025-04-13 00:57:42.667021 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-04-13 00:57:42.667035 | orchestrator | Sunday 13 April 2025 00:54:22 +0000 (0:00:00.105) 0:00:02.882 ********** 2025-04-13 00:57:42.667048 | orchestrator | ok: [localhost] 2025-04-13 00:57:42.667062 | orchestrator | 2025-04-13 00:57:42.667076 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 00:57:42.667089 | orchestrator | 2025-04-13 00:57:42.667103 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 00:57:42.667117 | orchestrator | Sunday 13 April 2025 00:54:22 +0000 (0:00:00.157) 0:00:03.039 ********** 2025-04-13 00:57:42.667131 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:42.667174 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:42.667188 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:42.667202 | orchestrator | 2025-04-13 00:57:42.667216 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 00:57:42.667230 | orchestrator | Sunday 13 April 2025 00:54:22 +0000 (0:00:00.429) 0:00:03.468 ********** 2025-04-13 00:57:42.667243 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-04-13 00:57:42.667273 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
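[Editor's note] The "Check MariaDB service" failure above is expected on a first deploy: the play probes the database endpoint and searches the TCP greeting for the string "MariaDB" to decide whether to run a fresh deploy or an upgrade (MariaDB/MySQL servers embed their version string in the initial protocol handshake). A minimal sketch of that kind of probe follows; it approximates the behavior of Ansible's `wait_for` module with `search_regex`, it is not the module's actual code, and the host/port values shown in the log (192.168.16.9:3306) are only examples:

```python
import socket
import time


def wait_for_search_string(host, port, search, timeout=10.0):
    """Poll host:port until the TCP banner contains `search` or `timeout` expires.

    Returns True if the string was found, False on timeout. A rough
    approximation of Ansible's wait_for with search_regex: a MariaDB
    server's first bytes include its version string, so this doubles
    as a cheap liveness check.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0) as conn:
                banner = conn.recv(4096)  # first bytes of the server greeting
                if search.encode() in banner:
                    return True
        except OSError:
            pass  # connection refused/reset or read timeout: server not up yet
        time.sleep(0.5)
    return False
```

Because the play marks the task with `ignore_errors` (the `...ignoring` above), a False result here is not fatal; it simply steers the following tasks toward `kolla_action_mariadb = deploy` instead of `upgrade`.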
2025-04-13 00:57:42.667287 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-04-13 00:57:42.667301 | orchestrator | 2025-04-13 00:57:42.667417 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-04-13 00:57:42.667436 | orchestrator | 2025-04-13 00:57:42.667450 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-04-13 00:57:42.667464 | orchestrator | Sunday 13 April 2025 00:54:23 +0000 (0:00:00.490) 0:00:03.959 ********** 2025-04-13 00:57:42.667478 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-13 00:57:42.667492 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-13 00:57:42.667505 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-13 00:57:42.667519 | orchestrator | 2025-04-13 00:57:42.667533 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-13 00:57:42.667546 | orchestrator | Sunday 13 April 2025 00:54:24 +0000 (0:00:00.627) 0:00:04.587 ********** 2025-04-13 00:57:42.667560 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:57:42.667575 | orchestrator | 2025-04-13 00:57:42.667588 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-04-13 00:57:42.667602 | orchestrator | Sunday 13 April 2025 00:54:24 +0000 (0:00:00.887) 0:00:05.474 ********** 2025-04-13 00:57:42.667635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-13 00:57:42.667665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-13 00:57:42.667682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-13 00:57:42.667711 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-13 00:57:42.667728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-13 00:57:42.667743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-13 00:57:42.667757 | orchestrator | 2025-04-13 00:57:42.667771 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-04-13 00:57:42.667785 | orchestrator | Sunday 13 April 2025 00:54:29 +0000 (0:00:04.412) 0:00:09.887 ********** 2025-04-13 00:57:42.667799 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.667821 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.667835 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.667849 | orchestrator | 2025-04-13 00:57:42.667862 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-04-13 00:57:42.667876 | orchestrator | Sunday 13 April 2025 00:54:30 +0000 (0:00:00.809) 0:00:10.696 ********** 2025-04-13 00:57:42.667890 | orchestrator | 
skipping: [testbed-node-1] 2025-04-13 00:57:42.667904 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.667918 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.667938 | orchestrator | 2025-04-13 00:57:42.667952 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-04-13 00:57:42.667966 | orchestrator | Sunday 13 April 2025 00:54:31 +0000 (0:00:01.614) 0:00:12.310 ********** 2025-04-13 00:57:42.667988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-13 00:57:42.668005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-13 00:57:42.668020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-13 00:57:42.668049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-13 00:57:42.668065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-13 00:57:42.668080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-13 00:57:42.668095 | orchestrator | 2025-04-13 00:57:42.668109 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-04-13 00:57:42.668123 | orchestrator | Sunday 13 April 2025 00:54:37 +0000 (0:00:05.314) 0:00:17.624 ********** 2025-04-13 00:57:42.668174 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.668190 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.668210 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.668224 | orchestrator | 2025-04-13 00:57:42.668238 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-04-13 00:57:42.668252 | orchestrator | Sunday 13 April 2025 00:54:38 +0000 (0:00:01.182) 0:00:18.806 ********** 2025-04-13 00:57:42.668265 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.668279 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:42.668293 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:42.668307 | orchestrator | 2025-04-13 00:57:42.668321 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-04-13 00:57:42.668335 | orchestrator | Sunday 13 April 2025 00:54:47 +0000 (0:00:08.969) 0:00:27.776 ********** 2025-04-13 00:57:42.668358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-13 00:57:42.668374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-13 00:57:42.668397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-13 00:57:42.668418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-13 00:57:42.668434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-13 00:57:42.668449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-13 00:57:42.668572 | orchestrator | 2025-04-13 00:57:42.668591 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-04-13 00:57:42.668661 | orchestrator | Sunday 13 April 2025 00:54:51 +0000 (0:00:03.971) 0:00:31.747 ********** 2025-04-13 00:57:42.668678 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.668693 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:42.668707 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:42.668721 | orchestrator | 2025-04-13 00:57:42.668735 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-04-13 00:57:42.668749 | orchestrator | Sunday 13 April 2025 00:54:52 +0000 (0:00:01.059) 0:00:32.806 ********** 2025-04-13 00:57:42.668763 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:42.668778 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:42.668791 | orchestrator | ok: [testbed-node-2] 2025-04-13 
00:57:42.668805 | orchestrator | 2025-04-13 00:57:42.668819 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-04-13 00:57:42.668833 | orchestrator | Sunday 13 April 2025 00:54:52 +0000 (0:00:00.460) 0:00:33.267 ********** 2025-04-13 00:57:42.668847 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:42.668861 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:42.668875 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:42.668888 | orchestrator | 2025-04-13 00:57:42.668902 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-04-13 00:57:42.668916 | orchestrator | Sunday 13 April 2025 00:54:53 +0000 (0:00:00.437) 0:00:33.705 ********** 2025-04-13 00:57:42.668931 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-04-13 00:57:42.668945 | orchestrator | ...ignoring 2025-04-13 00:57:42.668959 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-04-13 00:57:42.668973 | orchestrator | ...ignoring 2025-04-13 00:57:42.668987 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-04-13 00:57:42.669001 | orchestrator | ...ignoring 2025-04-13 00:57:42.669015 | orchestrator | 2025-04-13 00:57:42.669029 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-04-13 00:57:42.669043 | orchestrator | Sunday 13 April 2025 00:55:03 +0000 (0:00:10.789) 0:00:44.494 ********** 2025-04-13 00:57:42.669057 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:42.669070 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:42.669084 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:42.669098 | orchestrator | 2025-04-13 00:57:42.669120 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-04-13 00:57:42.669212 | orchestrator | Sunday 13 April 2025 00:55:04 +0000 (0:00:00.580) 0:00:45.075 ********** 2025-04-13 00:57:42.669241 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:42.669264 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.669283 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.669295 | orchestrator | 2025-04-13 00:57:42.669316 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-04-13 00:57:42.669331 | orchestrator | Sunday 13 April 2025 00:55:05 +0000 (0:00:00.745) 0:00:45.821 ********** 2025-04-13 00:57:42.669345 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:42.669359 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.669373 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.669392 | orchestrator | 2025-04-13 00:57:42.669425 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-04-13 00:57:42.669446 | orchestrator | Sunday 13 April 2025 00:55:05 +0000 (0:00:00.446) 0:00:46.267 ********** 2025-04-13 00:57:42.669467 | orchestrator | skipping: 
[testbed-node-0] 2025-04-13 00:57:42.669499 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.669521 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.669542 | orchestrator | 2025-04-13 00:57:42.669556 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-04-13 00:57:42.669572 | orchestrator | Sunday 13 April 2025 00:55:06 +0000 (0:00:00.580) 0:00:46.848 ********** 2025-04-13 00:57:42.669586 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:42.669600 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:42.669614 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:42.669627 | orchestrator | 2025-04-13 00:57:42.669641 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-04-13 00:57:42.669656 | orchestrator | Sunday 13 April 2025 00:55:06 +0000 (0:00:00.567) 0:00:47.415 ********** 2025-04-13 00:57:42.669671 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:42.669684 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.669698 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.669710 | orchestrator | 2025-04-13 00:57:42.669722 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-13 00:57:42.669735 | orchestrator | Sunday 13 April 2025 00:55:07 +0000 (0:00:00.534) 0:00:47.949 ********** 2025-04-13 00:57:42.669747 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.669759 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.669771 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-04-13 00:57:42.669783 | orchestrator | 2025-04-13 00:57:42.669796 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-04-13 00:57:42.669808 | orchestrator | Sunday 13 April 2025 00:55:07 +0000 (0:00:00.483) 0:00:48.433 ********** 2025-04-13 
00:57:42.669820 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.669832 | orchestrator | 2025-04-13 00:57:42.669844 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-04-13 00:57:42.669857 | orchestrator | Sunday 13 April 2025 00:55:18 +0000 (0:00:10.842) 0:00:59.275 ********** 2025-04-13 00:57:42.669869 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:42.669881 | orchestrator | 2025-04-13 00:57:42.669893 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-13 00:57:42.669906 | orchestrator | Sunday 13 April 2025 00:55:19 +0000 (0:00:00.240) 0:00:59.516 ********** 2025-04-13 00:57:42.669918 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:42.669930 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.669942 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.669955 | orchestrator | 2025-04-13 00:57:42.669967 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-04-13 00:57:42.669979 | orchestrator | Sunday 13 April 2025 00:55:20 +0000 (0:00:01.466) 0:01:00.982 ********** 2025-04-13 00:57:42.669991 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.670003 | orchestrator | 2025-04-13 00:57:42.670048 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-04-13 00:57:42.670063 | orchestrator | Sunday 13 April 2025 00:55:29 +0000 (0:00:09.365) 0:01:10.348 ********** 2025-04-13 00:57:42.670076 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:42.670088 | orchestrator | 2025-04-13 00:57:42.670100 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-04-13 00:57:42.670112 | orchestrator | Sunday 13 April 2025 00:55:31 +0000 (0:00:01.581) 0:01:11.929 ********** 2025-04-13 00:57:42.670124 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:42.670159 | 
orchestrator | 2025-04-13 00:57:42.670171 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-04-13 00:57:42.670184 | orchestrator | Sunday 13 April 2025 00:55:34 +0000 (0:00:02.673) 0:01:14.602 ********** 2025-04-13 00:57:42.670196 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.670208 | orchestrator | 2025-04-13 00:57:42.670220 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-04-13 00:57:42.670232 | orchestrator | Sunday 13 April 2025 00:55:34 +0000 (0:00:00.134) 0:01:14.736 ********** 2025-04-13 00:57:42.670251 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:42.670264 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.670287 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.670299 | orchestrator | 2025-04-13 00:57:42.670312 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-04-13 00:57:42.670324 | orchestrator | Sunday 13 April 2025 00:55:34 +0000 (0:00:00.452) 0:01:15.189 ********** 2025-04-13 00:57:42.670336 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:42.670348 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:42.670360 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:42.670373 | orchestrator | 2025-04-13 00:57:42.670385 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-04-13 00:57:42.670398 | orchestrator | Sunday 13 April 2025 00:55:35 +0000 (0:00:00.461) 0:01:15.651 ********** 2025-04-13 00:57:42.670410 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-04-13 00:57:42.670422 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.670434 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:42.670447 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:42.670459 | orchestrator | 2025-04-13 
00:57:42.670475 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-04-13 00:57:42.670488 | orchestrator | skipping: no hosts matched 2025-04-13 00:57:42.670500 | orchestrator | 2025-04-13 00:57:42.670513 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-04-13 00:57:42.670525 | orchestrator | 2025-04-13 00:57:42.670537 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-13 00:57:42.670549 | orchestrator | Sunday 13 April 2025 00:55:49 +0000 (0:00:14.639) 0:01:30.290 ********** 2025-04-13 00:57:42.670561 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:57:42.670574 | orchestrator | 2025-04-13 00:57:42.670586 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-13 00:57:42.670598 | orchestrator | Sunday 13 April 2025 00:56:09 +0000 (0:00:20.191) 0:01:50.481 ********** 2025-04-13 00:57:42.670618 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:42.670631 | orchestrator | 2025-04-13 00:57:42.670644 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-13 00:57:42.670656 | orchestrator | Sunday 13 April 2025 00:56:25 +0000 (0:00:15.591) 0:02:06.073 ********** 2025-04-13 00:57:42.670668 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:42.670680 | orchestrator | 2025-04-13 00:57:42.670693 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-04-13 00:57:42.670705 | orchestrator | 2025-04-13 00:57:42.670717 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-13 00:57:42.670729 | orchestrator | Sunday 13 April 2025 00:56:28 +0000 (0:00:02.750) 0:02:08.823 ********** 2025-04-13 00:57:42.670741 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:57:42.670754 | orchestrator | 2025-04-13 
00:57:42.670766 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-13 00:57:42.670778 | orchestrator | Sunday 13 April 2025 00:56:43 +0000 (0:00:15.001) 0:02:23.825 ********** 2025-04-13 00:57:42.670791 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:42.670803 | orchestrator | 2025-04-13 00:57:42.670815 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-13 00:57:42.670827 | orchestrator | Sunday 13 April 2025 00:57:03 +0000 (0:00:20.686) 0:02:44.511 ********** 2025-04-13 00:57:42.670840 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:42.670852 | orchestrator | 2025-04-13 00:57:42.670864 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-04-13 00:57:42.670876 | orchestrator | 2025-04-13 00:57:42.670888 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-13 00:57:42.670900 | orchestrator | Sunday 13 April 2025 00:57:06 +0000 (0:00:02.534) 0:02:47.046 ********** 2025-04-13 00:57:42.670912 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.670924 | orchestrator | 2025-04-13 00:57:42.670944 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-13 00:57:42.670956 | orchestrator | Sunday 13 April 2025 00:57:19 +0000 (0:00:13.111) 0:03:00.158 ********** 2025-04-13 00:57:42.670968 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:42.670980 | orchestrator | 2025-04-13 00:57:42.670993 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-13 00:57:42.671005 | orchestrator | Sunday 13 April 2025 00:57:24 +0000 (0:00:04.532) 0:03:04.691 ********** 2025-04-13 00:57:42.671017 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:42.671029 | orchestrator | 2025-04-13 00:57:42.671042 | orchestrator | PLAY [Apply mariadb 
post-configuration] **************************************** 2025-04-13 00:57:42.671054 | orchestrator | 2025-04-13 00:57:42.671066 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-04-13 00:57:42.671078 | orchestrator | Sunday 13 April 2025 00:57:26 +0000 (0:00:02.679) 0:03:07.371 ********** 2025-04-13 00:57:42.671090 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:57:42.671102 | orchestrator | 2025-04-13 00:57:42.671114 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-04-13 00:57:42.671127 | orchestrator | Sunday 13 April 2025 00:57:27 +0000 (0:00:00.760) 0:03:08.131 ********** 2025-04-13 00:57:42.671158 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.671171 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.671187 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.671208 | orchestrator | 2025-04-13 00:57:42.671228 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-04-13 00:57:42.671248 | orchestrator | Sunday 13 April 2025 00:57:30 +0000 (0:00:02.597) 0:03:10.728 ********** 2025-04-13 00:57:42.671269 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.671290 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.671303 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.671315 | orchestrator | 2025-04-13 00:57:42.671335 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-04-13 00:57:42.671355 | orchestrator | Sunday 13 April 2025 00:57:32 +0000 (0:00:02.156) 0:03:12.885 ********** 2025-04-13 00:57:42.671375 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.671394 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.671413 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.671434 | orchestrator | 
2025-04-13 00:57:42.671462 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-04-13 00:57:42.671484 | orchestrator | Sunday 13 April 2025 00:57:34 +0000 (0:00:02.371) 0:03:15.256 ********** 2025-04-13 00:57:42.671504 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.671520 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.671533 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:57:42.671545 | orchestrator | 2025-04-13 00:57:42.671558 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-04-13 00:57:42.671570 | orchestrator | Sunday 13 April 2025 00:57:36 +0000 (0:00:02.224) 0:03:17.481 ********** 2025-04-13 00:57:42.671582 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:57:42.671594 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:57:42.671606 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:57:42.671618 | orchestrator | 2025-04-13 00:57:42.671631 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-04-13 00:57:42.671643 | orchestrator | Sunday 13 April 2025 00:57:40 +0000 (0:00:03.572) 0:03:21.053 ********** 2025-04-13 00:57:42.671655 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:57:42.671667 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:57:42.671679 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:57:42.671691 | orchestrator | 2025-04-13 00:57:42.671703 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:57:42.671716 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-13 00:57:42.671738 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-04-13 00:57:42.671760 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  
2025-04-13 00:57:45.716636 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-04-13 00:57:45.716739 | orchestrator | 2025-04-13 00:57:45.716754 | orchestrator | 2025-04-13 00:57:45.716768 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 00:57:45.716780 | orchestrator | Sunday 13 April 2025 00:57:40 +0000 (0:00:00.380) 0:03:21.434 ********** 2025-04-13 00:57:45.716791 | orchestrator | =============================================================================== 2025-04-13 00:57:45.716803 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.28s 2025-04-13 00:57:45.716814 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.19s 2025-04-13 00:57:45.716825 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 14.64s 2025-04-13 00:57:45.716836 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.11s 2025-04-13 00:57:45.716847 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.84s 2025-04-13 00:57:45.716858 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.79s 2025-04-13 00:57:45.716869 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 9.37s 2025-04-13 00:57:45.716880 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 8.97s 2025-04-13 00:57:45.716891 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.31s 2025-04-13 00:57:45.716902 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.29s 2025-04-13 00:57:45.716913 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.53s 2025-04-13 00:57:45.716924 | orchestrator | 
mariadb : Ensuring config directories exist ----------------------------- 4.41s 2025-04-13 00:57:45.716935 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.97s 2025-04-13 00:57:45.716946 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.57s 2025-04-13 00:57:45.716956 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.68s 2025-04-13 00:57:45.716967 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.67s 2025-04-13 00:57:45.716978 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.60s 2025-04-13 00:57:45.716989 | orchestrator | Check MariaDB service --------------------------------------------------- 2.55s 2025-04-13 00:57:45.717000 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.37s 2025-04-13 00:57:45.717011 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.22s 2025-04-13 00:57:45.717022 | orchestrator | 2025-04-13 00:57:42 | INFO  | Task fcda8527-a297-4603-bdbc-d0f712414d1c is in state SUCCESS 2025-04-13 00:57:45.717034 | orchestrator | 2025-04-13 00:57:42 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state STARTED 2025-04-13 00:57:45.717045 | orchestrator | 2025-04-13 00:57:42 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:57:45.717070 | orchestrator | 2025-04-13 00:57:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:45.717083 | orchestrator | 2025-04-13 00:57:42 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:57:45.717094 | orchestrator | 2025-04-13 00:57:42 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:45.717121 | orchestrator | 2025-04-13 00:57:45 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state STARTED 
2025-04-13 00:57:45.717454 | orchestrator | 2025-04-13 00:57:45 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:57:45.717485 | orchestrator | 2025-04-13 00:57:45 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:45.719026 | orchestrator | 2025-04-13 00:57:45 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:57:48.754629 | orchestrator | 2025-04-13 00:57:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:48.754766 | orchestrator | 2025-04-13 00:57:48 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state STARTED 2025-04-13 00:57:48.755199 | orchestrator | 2025-04-13 00:57:48 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:57:48.756129 | orchestrator | 2025-04-13 00:57:48 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:48.757361 | orchestrator | 2025-04-13 00:57:48 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:57:51.795052 | orchestrator | 2025-04-13 00:57:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:51.795219 | orchestrator | 2025-04-13 00:57:51 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state STARTED 2025-04-13 00:57:51.800185 | orchestrator | 2025-04-13 00:57:51 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:57:51.802399 | orchestrator | 2025-04-13 00:57:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:51.803486 | orchestrator | 2025-04-13 00:57:51 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:57:54.839456 | orchestrator | 2025-04-13 00:57:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:54.839596 | orchestrator | 2025-04-13 00:57:54 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state STARTED 2025-04-13 00:57:54.840057 | 
orchestrator | 2025-04-13 00:57:54 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:57:54.840096 | orchestrator | 2025-04-13 00:57:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:54.841209 | orchestrator | 2025-04-13 00:57:54 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:57:57.877003 | orchestrator | 2025-04-13 00:57:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:57:57.877312 | orchestrator | 2025-04-13 00:57:57 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state STARTED 2025-04-13 00:57:57.877796 | orchestrator | 2025-04-13 00:57:57 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:57:57.877843 | orchestrator | 2025-04-13 00:57:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:57:57.878553 | orchestrator | 2025-04-13 00:57:57 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:57:57.878690 | orchestrator | 2025-04-13 00:57:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:58:00.935700 | orchestrator | 2025-04-13 00:58:00 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state STARTED 2025-04-13 00:58:00.936012 | orchestrator | 2025-04-13 00:58:00 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:58:00.944454 | orchestrator | 2025-04-13 00:58:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:58:00.945308 | orchestrator | 2025-04-13 00:58:00 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:58:03.990592 | orchestrator | 2025-04-13 00:58:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:58:03.990739 | orchestrator | 2025-04-13 00:58:03 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state STARTED 2025-04-13 00:58:03.991394 | orchestrator | 2025-04-13 
00:58:03 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:58:03.992940 | orchestrator | 2025-04-13 00:58:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:58:03.994934 | orchestrator | 2025-04-13 00:58:03 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:58:03.996464 | orchestrator | 2025-04-13 00:58:03 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:58:07.048941 | orchestrator | 2025-04-13 00:58:07 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state STARTED 2025-04-13 00:58:07.049352 | orchestrator | 2025-04-13 00:58:07 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:58:07.050164 | orchestrator | 2025-04-13 00:58:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:58:07.051250 | orchestrator | 2025-04-13 00:58:07 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:58:10.106614 | orchestrator | 2025-04-13 00:58:07 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:58:10.106757 | orchestrator | 2025-04-13 00:58:10 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state STARTED 2025-04-13 00:58:10.107035 | orchestrator | 2025-04-13 00:58:10 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:58:10.109782 | orchestrator | 2025-04-13 00:58:10 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:58:10.112082 | orchestrator | 2025-04-13 00:58:10 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:58:13.155626 | orchestrator | 2025-04-13 00:58:10 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:58:13.155782 | orchestrator | 2025-04-13 00:58:13 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state STARTED 2025-04-13 00:58:13.155937 | orchestrator | 2025-04-13 00:58:13 | INFO  | Task 
e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED
2025-04-13 00:58:13.157315 | orchestrator | 2025-04-13 00:58:13 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:58:13.158879 | orchestrator | 2025-04-13 00:58:13 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED
2025-04-13 00:58:16.209499 | orchestrator | 2025-04-13 00:58:13 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:59:17.352011 | orchestrator | 2025-04-13 00:59:17 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state STARTED
2025-04-13 00:59:17.355999 | orchestrator | 2025-04-13 00:59:17 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED
2025-04-13 00:59:17.363728 | orchestrator | 2025-04-13 00:59:17 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:59:20.412728 | orchestrator | 2025-04-13 00:59:17 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED
2025-04-13 00:59:20.412855 | orchestrator | 2025-04-13 00:59:17 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:59:20.412914 | orchestrator | 2025-04-13 00:59:20 | INFO  | Task e6acefa9-078c-4194-b2f5-88097151d17b is in state SUCCESS
2025-04-13 00:59:20.414498 | orchestrator |
2025-04-13 00:59:20.414548 | orchestrator |
2025-04-13 00:59:20.414563 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-13 00:59:20.414578 | orchestrator |
2025-04-13 00:59:20.414592 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-13 00:59:20.414617 | orchestrator | Sunday 13 April 2025 00:57:44 +0000 (0:00:00.338) 0:00:00.338 **********
2025-04-13 00:59:20.414633 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:59:20.414671 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:59:20.414685 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:59:20.414699 | orchestrator |
2025-04-13 00:59:20.414713 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-13 00:59:20.414727 | orchestrator | Sunday 13 April 2025 00:57:45 +0000 (0:00:00.421) 0:00:00.760 **********
2025-04-13 00:59:20.414741 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-04-13 00:59:20.414755 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-04-13 00:59:20.414769 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-04-13 00:59:20.414783 | orchestrator |
2025-04-13 00:59:20.414796 | orchestrator | PLAY [Apply role horizon] 
******************************************************
2025-04-13 00:59:20.414810 | orchestrator |
2025-04-13 00:59:20.414823 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-04-13 00:59:20.414837 | orchestrator | Sunday 13 April 2025 00:57:45 +0000 (0:00:00.310) 0:00:01.070 **********
2025-04-13 00:59:20.414851 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 00:59:20.414866 | orchestrator |
2025-04-13 00:59:20.414880 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-04-13 00:59:20.414894 | orchestrator | Sunday 13 April 2025 00:57:46 +0000 (0:00:00.801) 0:00:01.872 **********
2025-04-13 00:59:20.414914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-04-13 00:59:20.414991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-04-13 00:59:20.415020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-04-13 00:59:20.415036 | orchestrator |
2025-04-13 00:59:20.415051 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-04-13 00:59:20.415065 | orchestrator | Sunday 13 April 2025 00:57:47 +0000 (0:00:01.733) 0:00:03.605 **********
2025-04-13 00:59:20.415082 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:59:20.415098 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:59:20.415114 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:59:20.415151 | orchestrator |
2025-04-13 00:59:20.415167 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-04-13 00:59:20.415183 | orchestrator | Sunday 13 April 2025 00:57:48 +0000 (0:00:00.318) 0:00:03.924 **********
2025-04-13 00:59:20.415204 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-04-13 00:59:20.415220 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-04-13 00:59:20.415236 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-04-13 00:59:20.415251 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-04-13 00:59:20.415267 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-04-13 00:59:20.415282 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-04-13 00:59:20.415298 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-04-13 00:59:20.415313 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-04-13 00:59:20.415329 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-04-13 00:59:20.415345 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-04-13 00:59:20.415367 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-04-13 00:59:20.415382 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-04-13 00:59:20.415398 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-04-13 00:59:20.415413 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-04-13 00:59:20.415428 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-04-13 00:59:20.415442 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-04-13 00:59:20.415455 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-04-13 00:59:20.415469 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-04-13 00:59:20.415483 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-04-13 00:59:20.415503 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-04-13 00:59:20.415517 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-04-13 00:59:20.415532 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-04-13 00:59:20.415551 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-04-13 00:59:20.415566 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-04-13 00:59:20.415579 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-04-13 00:59:20.415594 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True})
2025-04-13 00:59:20.415609 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-04-13 00:59:20.415623 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-04-13 00:59:20.415637 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-04-13 00:59:20.415651 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-04-13 00:59:20.415664 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-04-13 00:59:20.415678 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-04-13 00:59:20.415692 | orchestrator |
2025-04-13 00:59:20.415706 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-13 00:59:20.415720 | orchestrator | Sunday 13 April 2025 00:57:49 +0000 (0:00:00.970) 0:00:04.895 **********
2025-04-13 00:59:20.415733 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:59:20.415747 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:59:20.415761 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:59:20.415775 | orchestrator |
2025-04-13 00:59:20.415788 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-13 00:59:20.415803 | orchestrator | Sunday 13 April 2025 00:57:49 +0000 (0:00:00.151) 0:00:05.309 **********
2025-04-13 00:59:20.415823 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.415838 | orchestrator |
2025-04-13 00:59:20.415858 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-13 00:59:20.415872 | orchestrator | Sunday 13 April 2025 00:57:49 +0000 (0:00:00.419) 0:00:05.461 **********
2025-04-13 00:59:20.415886 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.415900 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:59:20.415913 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:59:20.415928 | orchestrator |
2025-04-13 00:59:20.415941 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-13 00:59:20.415955 | orchestrator | Sunday 13 April 2025 00:57:50 +0000 (0:00:00.419) 0:00:05.880 **********
2025-04-13 00:59:20.415969 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:59:20.415983 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:59:20.415997 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:59:20.416016 | orchestrator |
2025-04-13 00:59:20.416030 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-13 00:59:20.416044 | orchestrator | Sunday 13 April 2025 00:57:50 +0000 (0:00:00.360) 0:00:06.241 **********
2025-04-13 00:59:20.416058 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.416072 | orchestrator |
2025-04-13 00:59:20.416085 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-13 00:59:20.416099 | orchestrator | Sunday 13 April 2025 00:57:50 +0000 (0:00:00.274) 0:00:06.515 **********
2025-04-13 00:59:20.416113 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.416142 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:59:20.416157 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:59:20.416170 | orchestrator |
2025-04-13 00:59:20.416184 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-13 00:59:20.416198 | orchestrator | Sunday 13 April 2025 00:57:51 +0000 (0:00:00.300) 0:00:06.816 **********
2025-04-13 00:59:20.416212 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:59:20.416226 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:59:20.416239 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:59:20.416253 | orchestrator |
2025-04-13 00:59:20.416267 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-13 00:59:20.416280 | orchestrator | Sunday 13 April 2025 00:57:51 +0000 (0:00:00.491) 0:00:07.308 **********
2025-04-13 00:59:20.416294 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.416308 | orchestrator |
2025-04-13 00:59:20.416322 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-13 00:59:20.416335 | orchestrator | Sunday 13 April 2025 00:57:51 +0000 (0:00:00.144) 0:00:07.452 **********
2025-04-13 00:59:20.416349 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.416363 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:59:20.416377 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:59:20.416391 | orchestrator |
2025-04-13 00:59:20.416404 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-13 00:59:20.416418 | orchestrator | Sunday 13 April 2025 00:57:52 +0000 (0:00:00.453) 0:00:07.905 **********
2025-04-13 00:59:20.416432 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:59:20.416446 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:59:20.416459 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:59:20.416473 | orchestrator |
2025-04-13 00:59:20.416487 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-13 00:59:20.416500 | orchestrator | Sunday 13 April 2025 00:57:52 +0000 (0:00:00.450) 0:00:08.356 **********
2025-04-13 00:59:20.416514 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.416528 | orchestrator |
2025-04-13 00:59:20.416542 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-13 00:59:20.416556 | orchestrator | Sunday 13 April 2025 00:57:52 +0000 (0:00:00.197) 0:00:08.554 **********
2025-04-13 00:59:20.416569 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.416583 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:59:20.416604 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:59:20.416617 | orchestrator |
2025-04-13 00:59:20.416631 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-13 00:59:20.416645 | orchestrator | Sunday 13 April 2025 00:57:53 +0000 (0:00:00.435) 0:00:08.990 **********
2025-04-13 00:59:20.416659 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:59:20.416673 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:59:20.416687 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:59:20.416700 | orchestrator |
2025-04-13 00:59:20.416714 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-13 00:59:20.416728 | orchestrator | Sunday 13 April 2025 00:57:53 +0000 (0:00:00.315) 0:00:09.305 **********
2025-04-13 00:59:20.416742 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.416755 | orchestrator |
2025-04-13 00:59:20.416769 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-13 00:59:20.416783 | orchestrator | Sunday 13 April 2025 00:57:53 +0000 (0:00:00.293) 0:00:09.599 **********
2025-04-13 00:59:20.416797 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.416811 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:59:20.416824 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:59:20.416838 | orchestrator |
2025-04-13 00:59:20.416857 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-13 00:59:20.416871 | orchestrator | Sunday 13 April 2025 00:57:54 +0000 (0:00:00.528) 0:00:10.127 **********
2025-04-13 00:59:20.416885 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:59:20.416899 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:59:20.417030 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:59:20.417046 | orchestrator |
2025-04-13 00:59:20.417060 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-13 00:59:20.417074 | orchestrator | Sunday 13 April 2025 00:57:55 +0000 (0:00:00.669) 0:00:10.797 **********
2025-04-13 00:59:20.417088 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.417102 | orchestrator |
2025-04-13 00:59:20.417115 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-13 00:59:20.417150 | orchestrator | Sunday 13 April 2025 00:57:55 +0000 (0:00:00.167) 0:00:10.964 **********
2025-04-13 00:59:20.417164 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.417178 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:59:20.417192 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:59:20.417206 | orchestrator |
2025-04-13 00:59:20.417220 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-13 00:59:20.417234 | orchestrator | Sunday 13 April 2025 00:57:55 +0000 (0:00:00.552) 0:00:11.516 **********
2025-04-13 00:59:20.417255 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:59:20.417269 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:59:20.417283 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:59:20.417297 | orchestrator |
2025-04-13 00:59:20.417311 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-13 00:59:20.417325 | orchestrator | Sunday 13 April 2025 00:57:56 +0000 (0:00:00.472) 0:00:11.989 **********
2025-04-13 00:59:20.417338 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.417352 | orchestrator |
2025-04-13 00:59:20.417366 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-13 00:59:20.417379 | orchestrator | Sunday 13 April 2025 00:57:56 +0000 (0:00:00.126) 0:00:12.115 **********
2025-04-13 00:59:20.417393 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.417407 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:59:20.417421 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:59:20.417434 | orchestrator |
2025-04-13 00:59:20.417448 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-13 00:59:20.417462 | orchestrator | Sunday 13 April 2025 00:57:57 +0000 (0:00:00.760) 0:00:12.876 **********
2025-04-13 00:59:20.417475 | orchestrator | ok: [testbed-node-0]
2025-04-13 00:59:20.417489 | orchestrator | ok: [testbed-node-1]
2025-04-13 00:59:20.417503 | orchestrator | ok: [testbed-node-2]
2025-04-13 00:59:20.417527 | orchestrator |
2025-04-13 00:59:20.417541 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-04-13 00:59:20.417555 | orchestrator | Sunday 13 April 2025 00:57:57 +0000 (0:00:00.529) 0:00:13.405 **********
2025-04-13 00:59:20.417569 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.417582 | orchestrator |
2025-04-13 00:59:20.417596 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-04-13 00:59:20.417610 | orchestrator | Sunday 13 April 2025 00:57:57 +0000 (0:00:00.115) 0:00:13.521 **********
2025-04-13 00:59:20.417624 | orchestrator | skipping: [testbed-node-0]
2025-04-13 00:59:20.417637 | orchestrator | skipping: [testbed-node-1]
2025-04-13 00:59:20.417653 | orchestrator | skipping: [testbed-node-2]
2025-04-13 00:59:20.417669 | orchestrator |
2025-04-13 00:59:20.417684 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-04-13 00:59:20.417699 | orchestrator | Sunday 13 April 2025 00:57:58 +0000 (0:00:00.288) 0:00:13.809 **********
2025-04-13 00:59:20.417715 | orchestrator | ok: [testbed-node-0]
2025-04-13 
00:59:20.417730 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:59:20.417746 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:59:20.417761 | orchestrator | 2025-04-13 00:59:20.417777 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-13 00:59:20.417793 | orchestrator | Sunday 13 April 2025 00:57:58 +0000 (0:00:00.447) 0:00:14.257 ********** 2025-04-13 00:59:20.417806 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:59:20.417820 | orchestrator | 2025-04-13 00:59:20.417834 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-13 00:59:20.417847 | orchestrator | Sunday 13 April 2025 00:57:58 +0000 (0:00:00.104) 0:00:14.362 ********** 2025-04-13 00:59:20.417861 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:59:20.417875 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:59:20.417898 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:59:20.417913 | orchestrator | 2025-04-13 00:59:20.417927 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-13 00:59:20.417941 | orchestrator | Sunday 13 April 2025 00:57:59 +0000 (0:00:00.461) 0:00:14.823 ********** 2025-04-13 00:59:20.417955 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:59:20.417970 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:59:20.417984 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:59:20.417997 | orchestrator | 2025-04-13 00:59:20.418012 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-13 00:59:20.418058 | orchestrator | Sunday 13 April 2025 00:57:59 +0000 (0:00:00.484) 0:00:15.308 ********** 2025-04-13 00:59:20.418072 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:59:20.418086 | orchestrator | 2025-04-13 00:59:20.418100 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-13 
00:59:20.418113 | orchestrator | Sunday 13 April 2025 00:57:59 +0000 (0:00:00.132) 0:00:15.441 ********** 2025-04-13 00:59:20.418158 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:59:20.418173 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:59:20.418187 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:59:20.418200 | orchestrator | 2025-04-13 00:59:20.418214 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-13 00:59:20.418228 | orchestrator | Sunday 13 April 2025 00:58:00 +0000 (0:00:00.643) 0:00:16.084 ********** 2025-04-13 00:59:20.418242 | orchestrator | ok: [testbed-node-0] 2025-04-13 00:59:20.418255 | orchestrator | ok: [testbed-node-1] 2025-04-13 00:59:20.418269 | orchestrator | ok: [testbed-node-2] 2025-04-13 00:59:20.418283 | orchestrator | 2025-04-13 00:59:20.418302 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-13 00:59:20.418316 | orchestrator | Sunday 13 April 2025 00:58:00 +0000 (0:00:00.470) 0:00:16.555 ********** 2025-04-13 00:59:20.418330 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:59:20.418344 | orchestrator | 2025-04-13 00:59:20.418500 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-13 00:59:20.418518 | orchestrator | Sunday 13 April 2025 00:58:01 +0000 (0:00:00.215) 0:00:16.771 ********** 2025-04-13 00:59:20.418548 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:59:20.418562 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:59:20.418576 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:59:20.418590 | orchestrator | 2025-04-13 00:59:20.418604 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-04-13 00:59:20.418618 | orchestrator | Sunday 13 April 2025 00:58:01 +0000 (0:00:00.574) 0:00:17.346 ********** 2025-04-13 00:59:20.418632 | orchestrator | changed: 
[testbed-node-2] 2025-04-13 00:59:20.418646 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:59:20.418659 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:59:20.418673 | orchestrator | 2025-04-13 00:59:20.418687 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-04-13 00:59:20.418701 | orchestrator | Sunday 13 April 2025 00:58:04 +0000 (0:00:02.978) 0:00:20.324 ********** 2025-04-13 00:59:20.418714 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-13 00:59:20.418736 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-13 00:59:20.418751 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-13 00:59:20.418765 | orchestrator | 2025-04-13 00:59:20.418779 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-04-13 00:59:20.418799 | orchestrator | Sunday 13 April 2025 00:58:08 +0000 (0:00:03.468) 0:00:23.792 ********** 2025-04-13 00:59:20.418814 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-13 00:59:20.418829 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-13 00:59:20.418843 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-13 00:59:20.418856 | orchestrator | 2025-04-13 00:59:20.418870 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-04-13 00:59:20.418884 | orchestrator | Sunday 13 April 2025 00:58:11 +0000 (0:00:03.473) 0:00:27.266 ********** 2025-04-13 00:59:20.418897 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-13 00:59:20.418911 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-13 00:59:20.418924 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-13 00:59:20.418938 | orchestrator | 2025-04-13 00:59:20.418952 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-04-13 00:59:20.418966 | orchestrator | Sunday 13 April 2025 00:58:13 +0000 (0:00:02.206) 0:00:29.473 ********** 2025-04-13 00:59:20.418979 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:59:20.418993 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:59:20.419007 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:59:20.419020 | orchestrator | 2025-04-13 00:59:20.419034 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-04-13 00:59:20.419048 | orchestrator | Sunday 13 April 2025 00:58:14 +0000 (0:00:00.296) 0:00:29.770 ********** 2025-04-13 00:59:20.419061 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:59:20.419075 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:59:20.419089 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:59:20.419103 | orchestrator | 2025-04-13 00:59:20.419117 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-13 00:59:20.419156 | orchestrator | Sunday 13 April 2025 00:58:14 +0000 (0:00:00.421) 0:00:30.191 ********** 2025-04-13 00:59:20.419171 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:59:20.419187 | orchestrator | 2025-04-13 00:59:20.419202 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-04-13 00:59:20.419225 | orchestrator | Sunday 13 April 2025 00:58:15 +0000 (0:00:00.661) 0:00:30.853 ********** 2025-04-13 00:59:20.419249 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-13 00:59:20.419267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-13 00:59:20.419299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-13 00:59:20.419315 | orchestrator | 2025-04-13 00:59:20.419329 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-04-13 00:59:20.419343 | orchestrator | Sunday 13 April 2025 00:58:16 +0000 (0:00:01.675) 0:00:32.529 ********** 2025-04-13 00:59:20.419358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-13 00:59:20.419380 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:59:20.419403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-13 00:59:20.419419 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:59:20.419434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-13 00:59:20.419456 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:59:20.419470 | orchestrator | 2025-04-13 00:59:20.419484 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 
2025-04-13 00:59:20.419498 | orchestrator | Sunday 13 April 2025 00:58:17 +0000 (0:00:00.882) 0:00:33.411 ********** 2025-04-13 00:59:20.419520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-13 00:59:20.419536 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:59:20.419550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-13 00:59:20.419573 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:59:20.419602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-13 00:59:20.419625 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:59:20.419639 | orchestrator | 2025-04-13 00:59:20.419652 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-04-13 00:59:20.419666 | orchestrator | Sunday 13 April 2025 00:58:18 +0000 (0:00:01.087) 0:00:34.499 ********** 2025-04-13 00:59:20.419687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-13 00:59:20.419704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 
'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-13 00:59:20.419733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-13 00:59:20.419749 | orchestrator | 2025-04-13 00:59:20.419763 | orchestrator | TASK 
[horizon : include_tasks] ************************************************* 2025-04-13 00:59:20.419777 | orchestrator | Sunday 13 April 2025 00:58:24 +0000 (0:00:05.277) 0:00:39.776 ********** 2025-04-13 00:59:20.419791 | orchestrator | skipping: [testbed-node-0] 2025-04-13 00:59:20.419805 | orchestrator | skipping: [testbed-node-1] 2025-04-13 00:59:20.419818 | orchestrator | skipping: [testbed-node-2] 2025-04-13 00:59:20.419832 | orchestrator | 2025-04-13 00:59:20.419846 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-13 00:59:20.419859 | orchestrator | Sunday 13 April 2025 00:58:24 +0000 (0:00:00.418) 0:00:40.195 ********** 2025-04-13 00:59:20.419873 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 00:59:20.419887 | orchestrator | 2025-04-13 00:59:20.419901 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-04-13 00:59:20.419921 | orchestrator | Sunday 13 April 2025 00:58:25 +0000 (0:00:00.616) 0:00:40.811 ********** 2025-04-13 00:59:20.419935 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:59:20.419949 | orchestrator | 2025-04-13 00:59:20.419962 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-04-13 00:59:20.419976 | orchestrator | Sunday 13 April 2025 00:58:27 +0000 (0:00:02.521) 0:00:43.332 ********** 2025-04-13 00:59:20.419990 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:59:20.420004 | orchestrator | 2025-04-13 00:59:20.420017 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-04-13 00:59:20.420031 | orchestrator | Sunday 13 April 2025 00:58:30 +0000 (0:00:02.376) 0:00:45.709 ********** 2025-04-13 00:59:20.420044 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:59:20.420058 | orchestrator | 2025-04-13 00:59:20.420071 | orchestrator | 
TASK [horizon : Flush handlers] ************************************************ 2025-04-13 00:59:20.420085 | orchestrator | Sunday 13 April 2025 00:58:43 +0000 (0:00:13.084) 0:00:58.794 ********** 2025-04-13 00:59:20.420098 | orchestrator | 2025-04-13 00:59:20.420112 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-04-13 00:59:20.420143 | orchestrator | Sunday 13 April 2025 00:58:43 +0000 (0:00:00.059) 0:00:58.853 ********** 2025-04-13 00:59:20.420157 | orchestrator | 2025-04-13 00:59:20.420171 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-04-13 00:59:20.420185 | orchestrator | Sunday 13 April 2025 00:58:43 +0000 (0:00:00.194) 0:00:59.048 ********** 2025-04-13 00:59:20.420199 | orchestrator | 2025-04-13 00:59:20.420213 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-04-13 00:59:20.420226 | orchestrator | Sunday 13 April 2025 00:58:43 +0000 (0:00:00.058) 0:00:59.106 ********** 2025-04-13 00:59:20.420240 | orchestrator | changed: [testbed-node-0] 2025-04-13 00:59:20.420254 | orchestrator | changed: [testbed-node-1] 2025-04-13 00:59:20.420267 | orchestrator | changed: [testbed-node-2] 2025-04-13 00:59:20.420281 | orchestrator | 2025-04-13 00:59:20.420295 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 00:59:20.420309 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-04-13 00:59:20.420323 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-04-13 00:59:20.420337 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-04-13 00:59:20.420351 | orchestrator | 2025-04-13 00:59:20.420364 | orchestrator | 2025-04-13 00:59:20.420378 | orchestrator | TASKS RECAP 
******************************************************************** 2025-04-13 00:59:20.420392 | orchestrator | Sunday 13 April 2025 00:59:19 +0000 (0:00:36.560) 0:01:35.667 ********** 2025-04-13 00:59:20.420405 | orchestrator | =============================================================================== 2025-04-13 00:59:20.420419 | orchestrator | horizon : Restart horizon container ------------------------------------ 36.56s 2025-04-13 00:59:20.420433 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.08s 2025-04-13 00:59:20.420447 | orchestrator | horizon : Deploy horizon container -------------------------------------- 5.28s 2025-04-13 00:59:20.420461 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.47s 2025-04-13 00:59:20.420474 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.47s 2025-04-13 00:59:20.420488 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.98s 2025-04-13 00:59:20.420502 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.52s 2025-04-13 00:59:20.420515 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.38s 2025-04-13 00:59:20.420529 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.21s 2025-04-13 00:59:20.420552 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.73s 2025-04-13 00:59:20.420577 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.68s 2025-04-13 00:59:20.420603 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.09s 2025-04-13 00:59:20.420629 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.97s 2025-04-13 00:59:20.420663 | orchestrator | service-cert-copy : horizon | 
Copying over backend internal TLS certificate --- 0.88s 2025-04-13 00:59:23.458545 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s 2025-04-13 00:59:23.458685 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.76s 2025-04-13 00:59:23.458705 | orchestrator | horizon : Update policy file name --------------------------------------- 0.67s 2025-04-13 00:59:23.458720 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s 2025-04-13 00:59:23.458734 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.64s 2025-04-13 00:59:23.458748 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2025-04-13 00:59:23.458763 | orchestrator | 2025-04-13 00:59:20 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:59:23.458778 | orchestrator | 2025-04-13 00:59:20 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:59:23.458793 | orchestrator | 2025-04-13 00:59:20 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:59:23.458806 | orchestrator | 2025-04-13 00:59:20 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:59:23.458837 | orchestrator | 2025-04-13 00:59:23 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:59:23.461068 | orchestrator | 2025-04-13 00:59:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:59:23.462750 | orchestrator | 2025-04-13 00:59:23 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:59:23.462947 | orchestrator | 2025-04-13 00:59:23 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:59:26.515371 | orchestrator | 2025-04-13 00:59:26 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 
00:59:26.520707 | orchestrator | 2025-04-13 00:59:26 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:59:26.526207 | orchestrator | 2025-04-13 00:59:26 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:59:29.569738 | orchestrator | 2025-04-13 00:59:26 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:59:29.569902 | orchestrator | 2025-04-13 00:59:29 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:59:29.571290 | orchestrator | 2025-04-13 00:59:29 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:59:29.573440 | orchestrator | 2025-04-13 00:59:29 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:59:32.612037 | orchestrator | 2025-04-13 00:59:29 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:59:32.612317 | orchestrator | 2025-04-13 00:59:32 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:59:32.613020 | orchestrator | 2025-04-13 00:59:32 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:59:32.614417 | orchestrator | 2025-04-13 00:59:32 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:59:35.666621 | orchestrator | 2025-04-13 00:59:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:59:35.666801 | orchestrator | 2025-04-13 00:59:35 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:59:35.668229 | orchestrator | 2025-04-13 00:59:35 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:59:35.670206 | orchestrator | 2025-04-13 00:59:35 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:59:38.717506 | orchestrator | 2025-04-13 00:59:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:59:38.717786 | orchestrator | 2025-04-13 00:59:38 | 
INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:59:38.718635 | orchestrator | 2025-04-13 00:59:38 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:59:38.718699 | orchestrator | 2025-04-13 00:59:38 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state STARTED 2025-04-13 00:59:41.771907 | orchestrator | 2025-04-13 00:59:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:59:41.772155 | orchestrator | 2025-04-13 00:59:41 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 00:59:41.773806 | orchestrator | 2025-04-13 00:59:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 00:59:41.776294 | orchestrator | 2025-04-13 00:59:41 | INFO  | Task 138c0de5-713a-4c81-924d-786fe0e84232 is in state SUCCESS 2025-04-13 00:59:41.778234 | orchestrator | 2025-04-13 00:59:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 00:59:41.778378 | orchestrator | 2025-04-13 00:59:41.778637 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-13 00:59:41.778658 | orchestrator | 2025-04-13 00:59:41.778674 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-04-13 00:59:41.778688 | orchestrator | 2025-04-13 00:59:41.778703 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-13 00:59:41.778718 | orchestrator | Sunday 13 April 2025 00:57:30 +0000 (0:00:01.186) 0:00:01.186 ********** 2025-04-13 00:59:41.778733 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:59:41.778768 | orchestrator | 2025-04-13 00:59:41.778783 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-13 00:59:41.778796 | orchestrator | Sunday 13 April 
2025 00:57:31 +0000 (0:00:00.519) 0:00:01.706 ********** 2025-04-13 00:59:41.778811 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-04-13 00:59:41.778825 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-04-13 00:59:41.778839 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-04-13 00:59:41.778853 | orchestrator | 2025-04-13 00:59:41.778866 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-13 00:59:41.778880 | orchestrator | Sunday 13 April 2025 00:57:32 +0000 (0:00:00.855) 0:00:02.561 ********** 2025-04-13 00:59:41.778894 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 00:59:41.778908 | orchestrator | 2025-04-13 00:59:41.778922 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-13 00:59:41.778935 | orchestrator | Sunday 13 April 2025 00:57:32 +0000 (0:00:00.690) 0:00:03.252 ********** 2025-04-13 00:59:41.778949 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:59:41.778964 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:59:41.778977 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:59:41.778991 | orchestrator | 2025-04-13 00:59:41.779005 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-13 00:59:41.779019 | orchestrator | Sunday 13 April 2025 00:57:33 +0000 (0:00:00.601) 0:00:03.853 ********** 2025-04-13 00:59:41.779033 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:59:41.779073 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:59:41.779087 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:59:41.779101 | orchestrator | 2025-04-13 00:59:41.779115 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-13 00:59:41.779171 | orchestrator | Sunday 13 April 2025 00:57:33 +0000 (0:00:00.282) 
0:00:04.136 ********** 2025-04-13 00:59:41.779185 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:59:41.779199 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:59:41.779213 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:59:41.779226 | orchestrator | 2025-04-13 00:59:41.779240 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-13 00:59:41.779257 | orchestrator | Sunday 13 April 2025 00:57:34 +0000 (0:00:00.861) 0:00:04.998 ********** 2025-04-13 00:59:41.779272 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:59:41.779288 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:59:41.779321 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:59:41.779336 | orchestrator | 2025-04-13 00:59:41.779351 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-13 00:59:41.779367 | orchestrator | Sunday 13 April 2025 00:57:34 +0000 (0:00:00.313) 0:00:05.311 ********** 2025-04-13 00:59:41.779383 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:59:41.779398 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:59:41.779413 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:59:41.779428 | orchestrator | 2025-04-13 00:59:41.779444 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-13 00:59:41.779459 | orchestrator | Sunday 13 April 2025 00:57:35 +0000 (0:00:00.326) 0:00:05.638 ********** 2025-04-13 00:59:41.779473 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:59:41.779489 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:59:41.779504 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:59:41.779520 | orchestrator | 2025-04-13 00:59:41.779535 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-13 00:59:41.779551 | orchestrator | Sunday 13 April 2025 00:57:35 +0000 (0:00:00.317) 0:00:05.955 ********** 2025-04-13 00:59:41.779566 | orchestrator | 
skipping: [testbed-node-3] 2025-04-13 00:59:41.779583 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:59:41.779599 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:59:41.779615 | orchestrator | 2025-04-13 00:59:41.779628 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-13 00:59:41.779642 | orchestrator | Sunday 13 April 2025 00:57:36 +0000 (0:00:00.526) 0:00:06.481 ********** 2025-04-13 00:59:41.779655 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:59:41.779669 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:59:41.779682 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:59:41.779696 | orchestrator | 2025-04-13 00:59:41.779710 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-13 00:59:41.779724 | orchestrator | Sunday 13 April 2025 00:57:36 +0000 (0:00:00.288) 0:00:06.770 ********** 2025-04-13 00:59:41.779737 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-13 00:59:41.779756 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-13 00:59:41.779770 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-13 00:59:41.779784 | orchestrator | 2025-04-13 00:59:41.779798 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-13 00:59:41.779811 | orchestrator | Sunday 13 April 2025 00:57:37 +0000 (0:00:00.742) 0:00:07.513 ********** 2025-04-13 00:59:41.779825 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:59:41.779838 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:59:41.779852 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:59:41.779865 | orchestrator | 2025-04-13 00:59:41.779879 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-13 00:59:41.779892 | orchestrator | 
Sunday 13 April 2025 00:57:37 +0000 (0:00:00.527) 0:00:08.040 ********** 2025-04-13 00:59:41.779915 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-13 00:59:41.779957 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-13 00:59:41.779972 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-13 00:59:41.779986 | orchestrator | 2025-04-13 00:59:41.780000 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-13 00:59:41.780014 | orchestrator | Sunday 13 April 2025 00:57:40 +0000 (0:00:02.428) 0:00:10.469 ********** 2025-04-13 00:59:41.780028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-13 00:59:41.780042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-13 00:59:41.780055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-13 00:59:41.780069 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:59:41.780083 | orchestrator | 2025-04-13 00:59:41.780097 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-13 00:59:41.780110 | orchestrator | Sunday 13 April 2025 00:57:40 +0000 (0:00:00.467) 0:00:10.937 ********** 2025-04-13 00:59:41.780146 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-13 00:59:41.780176 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-13 00:59:41.780191 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-13 00:59:41.780205 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:59:41.780233 | orchestrator | 2025-04-13 00:59:41.780248 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-13 00:59:41.780262 | orchestrator | Sunday 13 April 2025 00:57:41 +0000 (0:00:00.674) 0:00:11.611 ********** 2025-04-13 00:59:41.780281 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-13 00:59:41.780297 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-13 00:59:41.780312 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-13 00:59:41.780326 | orchestrator | skipping: 
[testbed-node-3] 2025-04-13 00:59:41.780339 | orchestrator | 2025-04-13 00:59:41.780353 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-13 00:59:41.780367 | orchestrator | Sunday 13 April 2025 00:57:41 +0000 (0:00:00.175) 0:00:11.787 ********** 2025-04-13 00:59:41.780382 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '181935c7d3e1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-13 00:57:38.509121', 'end': '2025-04-13 00:57:38.557533', 'delta': '0:00:00.048412', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['181935c7d3e1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-04-13 00:59:41.780421 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '179a905db4fc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-13 00:57:39.123957', 'end': '2025-04-13 00:57:39.161657', 'delta': '0:00:00.037700', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['179a905db4fc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-04-13 00:59:41.780439 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '6fda53730048', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', 
'--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-13 00:57:39.687839', 'end': '2025-04-13 00:57:39.730524', 'delta': '0:00:00.042685', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6fda53730048'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-04-13 00:59:41.780454 | orchestrator | 2025-04-13 00:59:41.780469 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-13 00:59:41.780482 | orchestrator | Sunday 13 April 2025 00:57:41 +0000 (0:00:00.201) 0:00:11.988 ********** 2025-04-13 00:59:41.780496 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:59:41.780510 | orchestrator | ok: [testbed-node-4] 2025-04-13 00:59:41.780524 | orchestrator | ok: [testbed-node-5] 2025-04-13 00:59:41.780538 | orchestrator | 2025-04-13 00:59:41.780552 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-13 00:59:41.780566 | orchestrator | Sunday 13 April 2025 00:57:42 +0000 (0:00:00.468) 0:00:12.457 ********** 2025-04-13 00:59:41.780580 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-04-13 00:59:41.780594 | orchestrator | 2025-04-13 00:59:41.780607 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-13 00:59:41.780621 | orchestrator | Sunday 13 April 2025 00:57:43 +0000 (0:00:01.331) 0:00:13.788 ********** 2025-04-13 00:59:41.780635 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:59:41.780649 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:59:41.780662 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:59:41.780676 | orchestrator | 
2025-04-13 00:59:41.780690 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-13 00:59:41.780704 | orchestrator | Sunday 13 April 2025 00:57:43 +0000 (0:00:00.512) 0:00:14.300 ********** 2025-04-13 00:59:41.780717 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:59:41.780731 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:59:41.780745 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:59:41.780758 | orchestrator | 2025-04-13 00:59:41.780772 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-13 00:59:41.780786 | orchestrator | Sunday 13 April 2025 00:57:44 +0000 (0:00:00.448) 0:00:14.748 ********** 2025-04-13 00:59:41.780807 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:59:41.780821 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:59:41.780834 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:59:41.780848 | orchestrator | 2025-04-13 00:59:41.780862 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-13 00:59:41.780875 | orchestrator | Sunday 13 April 2025 00:57:44 +0000 (0:00:00.303) 0:00:15.052 ********** 2025-04-13 00:59:41.780889 | orchestrator | ok: [testbed-node-3] 2025-04-13 00:59:41.780903 | orchestrator | 2025-04-13 00:59:41.780916 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-13 00:59:41.780930 | orchestrator | Sunday 13 April 2025 00:57:44 +0000 (0:00:00.120) 0:00:15.173 ********** 2025-04-13 00:59:41.780944 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:59:41.780958 | orchestrator | 2025-04-13 00:59:41.780971 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-13 00:59:41.780990 | orchestrator | Sunday 13 April 2025 00:57:44 +0000 (0:00:00.240) 0:00:15.413 ********** 2025-04-13 00:59:41.781004 | orchestrator | 
skipping: [testbed-node-3] 2025-04-13 00:59:41.781018 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:59:41.781032 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:59:41.781045 | orchestrator | 2025-04-13 00:59:41.781059 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-13 00:59:41.781073 | orchestrator | Sunday 13 April 2025 00:57:45 +0000 (0:00:00.558) 0:00:15.972 ********** 2025-04-13 00:59:41.781086 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:59:41.781100 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:59:41.781114 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:59:41.781192 | orchestrator | 2025-04-13 00:59:41.781207 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-13 00:59:41.781221 | orchestrator | Sunday 13 April 2025 00:57:45 +0000 (0:00:00.323) 0:00:16.295 ********** 2025-04-13 00:59:41.781235 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:59:41.781249 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:59:41.781263 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:59:41.781276 | orchestrator | 2025-04-13 00:59:41.781290 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-13 00:59:41.781304 | orchestrator | Sunday 13 April 2025 00:57:46 +0000 (0:00:00.378) 0:00:16.674 ********** 2025-04-13 00:59:41.781318 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:59:41.781333 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:59:41.781354 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:59:41.781368 | orchestrator | 2025-04-13 00:59:41.781383 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-13 00:59:41.781397 | orchestrator | Sunday 13 April 2025 00:57:46 +0000 (0:00:00.371) 0:00:17.046 ********** 2025-04-13 00:59:41.781410 | orchestrator | 
skipping: [testbed-node-3] 2025-04-13 00:59:41.781424 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:59:41.781438 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:59:41.781452 | orchestrator | 2025-04-13 00:59:41.781465 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-13 00:59:41.781479 | orchestrator | Sunday 13 April 2025 00:57:47 +0000 (0:00:00.582) 0:00:17.628 ********** 2025-04-13 00:59:41.781493 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:59:41.781507 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:59:41.781521 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:59:41.781540 | orchestrator | 2025-04-13 00:59:41.781555 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-04-13 00:59:41.781568 | orchestrator | Sunday 13 April 2025 00:57:47 +0000 (0:00:00.345) 0:00:17.974 ********** 2025-04-13 00:59:41.781582 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:59:41.781596 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:59:41.781610 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:59:41.781624 | orchestrator | 2025-04-13 00:59:41.781638 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-13 00:59:41.781659 | orchestrator | Sunday 13 April 2025 00:57:47 +0000 (0:00:00.315) 0:00:18.290 ********** 2025-04-13 00:59:41.781674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2045bad1--ab77--5a33--981a--e42fb4136085-osd--block--2045bad1--ab77--5a33--981a--e42fb4136085', 'dm-uuid-LVM-9ClZghmJtxOPX1O0zOX2WtCXvawwZfDy7wBl25fdepsNrLXd7sjUWlLK1N9BRuwM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--075038e7--2b9c--5de1--9fc0--4ab80f908b26-osd--block--075038e7--2b9c--5de1--9fc0--4ab80f908b26', 'dm-uuid-LVM-ijdtEhTChvVxavxMfY9fKDsMZwQKU6xtDJWHGcfUiA0AHJDZ056L3ZFkBJcDFDjX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781811 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a50ad019--9a42--5399--96dd--0ec75fe99929-osd--block--a50ad019--9a42--5399--96dd--0ec75fe99929', 'dm-uuid-LVM-0MGJ4no5hg7d09lOzjNoAU8ORU59dmPsJyAr8ZQr8cP5sKdIDED1qCrvMRfzesIu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099', 'scsi-SQEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part1', 'scsi-SQEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part14', 'scsi-SQEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part15', 'scsi-SQEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part16', 'scsi-SQEMU_QEMU_HARDDISK_f620be22-b7d1-409f-9583-d71db6137099-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.781861 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1aa12de--f4f1--5fa1--83b9--2c9c84fd1e23-osd--block--c1aa12de--f4f1--5fa1--83b9--2c9c84fd1e23', 'dm-uuid-LVM-fcrvKvvG1tbWkSLXlca50ispeFKUupGEQEdmdc0FRNe91iBPAGIWkZVduBCSKi30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2045bad1--ab77--5a33--981a--e42fb4136085-osd--block--2045bad1--ab77--5a33--981a--e42fb4136085'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Wg7Rcb-fdKY-KXS7-TPfC-U0vO-eHnO-jchBgv', 'scsi-0QEMU_QEMU_HARDDISK_d62d4166-25a1-4741-94fc-59c78379b097', 'scsi-SQEMU_QEMU_HARDDISK_d62d4166-25a1-4741-94fc-59c78379b097'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.781897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.781910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--075038e7--2b9c--5de1--9fc0--4ab80f908b26-osd--block--075038e7--2b9c--5de1--9fc0--4ab80f908b26'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z8bkSt-YrWX-zbEK-9ciE-YDhx-WB78-xQG7ZG', 'scsi-0QEMU_QEMU_HARDDISK_24d70fc8-7961-4caf-9f39-267d5072f1bc', 'scsi-SQEMU_QEMU_HARDDISK_24d70fc8-7961-4caf-9f39-267d5072f1bc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.781923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3f4097-e1b2-4e0f-b572-2003c7cd8b15', 'scsi-SQEMU_QEMU_HARDDISK_bd3f4097-e1b2-4e0f-b572-2003c7cd8b15'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.783469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE 
[Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-13-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.783522 | orchestrator | skipping: [testbed-node-3] 2025-04-13 00:59:41.783556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7', 'scsi-SQEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd1b3b5b-24e0-4b83-98ac-551986a77df7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.783673 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a50ad019--9a42--5399--96dd--0ec75fe99929-osd--block--a50ad019--9a42--5399--96dd--0ec75fe99929'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-exD8So-0SKp-0Ku2-66L3-4IzZ-cVpj-7Vw8bQ', 'scsi-0QEMU_QEMU_HARDDISK_a0e179ac-f513-4bce-8698-5c5d77bb97a6', 'scsi-SQEMU_QEMU_HARDDISK_a0e179ac-f513-4bce-8698-5c5d77bb97a6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.783686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c75c5404--ac9a--5ffa--97a7--d9feeb5e7a2a-osd--block--c75c5404--ac9a--5ffa--97a7--d9feeb5e7a2a', 'dm-uuid-LVM-YxZVTg6p9WxxiVJ4KPLhGhHhq40mwRUjoroj3FrCb42cpkulySnmKq0DGWrlWzuP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c1aa12de--f4f1--5fa1--83b9--2c9c84fd1e23-osd--block--c1aa12de--f4f1--5fa1--83b9--2c9c84fd1e23'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sVOc6A-lmOP-2cez-e17H-BIO7-pUke-8KbMpp', 'scsi-0QEMU_QEMU_HARDDISK_aad8aa45-f541-429b-bfb0-28cd3fbd229c', 'scsi-SQEMU_QEMU_HARDDISK_aad8aa45-f541-429b-bfb0-28cd3fbd229c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.783709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea334510-65a0-4c82-ab7f-212ffba0ceeb', 'scsi-SQEMU_QEMU_HARDDISK_ea334510-65a0-4c82-ab7f-212ffba0ceeb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.783725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cc16a9be--1c89--5ed3--8c34--f79b9c168598-osd--block--cc16a9be--1c89--5ed3--8c34--f79b9c168598', 'dm-uuid-LVM-3EJewZS2mDPacCqo8O8bhWXwBTUvAhMqy1z6rgd9gwK4f900LkMeiV7yeuPbN5zE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-13-00-02-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.783777 | orchestrator | skipping: [testbed-node-4] 2025-04-13 00:59:41.783788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-13 00:59:41.783870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8', 'scsi-SQEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd403a5d-f47e-4cc0-967a-066b990b05e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.783885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c75c5404--ac9a--5ffa--97a7--d9feeb5e7a2a-osd--block--c75c5404--ac9a--5ffa--97a7--d9feeb5e7a2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YqhesG-X622-ppI0-oBRQ-6rJ0-L1CB-dky6fD', 'scsi-0QEMU_QEMU_HARDDISK_15f38305-5d3a-4a2a-94a9-ec4f360f12f0', 'scsi-SQEMU_QEMU_HARDDISK_15f38305-5d3a-4a2a-94a9-ec4f360f12f0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.783896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cc16a9be--1c89--5ed3--8c34--f79b9c168598-osd--block--cc16a9be--1c89--5ed3--8c34--f79b9c168598'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IpkCNw-rEwG-L006-2kPo-Gqut-ZuOO-dqDdm9', 'scsi-0QEMU_QEMU_HARDDISK_466f66ff-268f-471d-abe8-9f0f353ab0cc', 'scsi-SQEMU_QEMU_HARDDISK_466f66ff-268f-471d-abe8-9f0f353ab0cc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.783922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d771f52a-9ada-4427-8de2-0003eafe1256', 'scsi-SQEMU_QEMU_HARDDISK_d771f52a-9ada-4427-8de2-0003eafe1256'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.783934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-13-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-13 00:59:41.783944 | orchestrator | skipping: [testbed-node-5] 2025-04-13 00:59:41.783955 | orchestrator | 2025-04-13 00:59:41.783965 | orchestrator | TASK [ceph-facts : get ceph current 
status] ************************************
2025-04-13 00:59:41.783976 | orchestrator | Sunday 13 April 2025 00:57:48 +0000 (0:00:00.580) 0:00:18.870 **********
2025-04-13 00:59:41.783986 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-04-13 00:59:41.783996 | orchestrator |
2025-04-13 00:59:41.784006 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] *******************************
2025-04-13 00:59:41.784016 | orchestrator | Sunday 13 April 2025 00:57:49 +0000 (0:00:01.533) 0:00:20.403 **********
2025-04-13 00:59:41.784026 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:59:41.784036 | orchestrator |
2025-04-13 00:59:41.784047 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] **************************************
2025-04-13 00:59:41.784057 | orchestrator | Sunday 13 April 2025 00:57:50 +0000 (0:00:00.154) 0:00:20.558 **********
2025-04-13 00:59:41.784067 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:59:41.784077 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:59:41.784087 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:59:41.784097 | orchestrator |
2025-04-13 00:59:41.784108 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ******************************
2025-04-13 00:59:41.784140 | orchestrator | Sunday 13 April 2025 00:57:50 +0000 (0:00:00.386) 0:00:20.944 **********
2025-04-13 00:59:41.784151 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:59:41.784161 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:59:41.784171 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:59:41.784181 | orchestrator |
2025-04-13 00:59:41.784191 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] ***************
2025-04-13 00:59:41.784201 | orchestrator | Sunday 13 April 2025 00:57:51 +0000 (0:00:00.679) 0:00:21.624 **********
2025-04-13 00:59:41.784210 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:59:41.784221 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:59:41.784231 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:59:41.784241 | orchestrator |
2025-04-13 00:59:41.784251 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-04-13 00:59:41.784261 | orchestrator | Sunday 13 April 2025 00:57:51 +0000 (0:00:00.294) 0:00:21.918 **********
2025-04-13 00:59:41.784271 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:59:41.784281 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:59:41.784291 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:59:41.784301 | orchestrator |
2025-04-13 00:59:41.784311 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-04-13 00:59:41.784321 | orchestrator | Sunday 13 April 2025 00:57:52 +0000 (0:00:00.872) 0:00:22.791 **********
2025-04-13 00:59:41.784331 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.784348 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.784359 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.784369 | orchestrator |
2025-04-13 00:59:41.784379 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-04-13 00:59:41.784389 | orchestrator | Sunday 13 April 2025 00:57:52 +0000 (0:00:00.308) 0:00:23.099 **********
2025-04-13 00:59:41.784399 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.784409 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.784419 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.784429 | orchestrator |
2025-04-13 00:59:41.784439 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-04-13 00:59:41.784450 | orchestrator | Sunday 13 April 2025 00:57:53 +0000 (0:00:00.465) 0:00:23.565 **********
2025-04-13 00:59:41.784460 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.784470 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.784480 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.784490 | orchestrator |
2025-04-13 00:59:41.784500 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-04-13 00:59:41.784510 | orchestrator | Sunday 13 April 2025 00:57:53 +0000 (0:00:00.332) 0:00:23.897 **********
2025-04-13 00:59:41.784520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-04-13 00:59:41.784530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-04-13 00:59:41.784541 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-04-13 00:59:41.784551 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-04-13 00:59:41.784565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-04-13 00:59:41.784576 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.784586 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-04-13 00:59:41.784596 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-04-13 00:59:41.784606 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.784616 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-04-13 00:59:41.784626 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-04-13 00:59:41.784636 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.784646 | orchestrator |
2025-04-13 00:59:41.784656 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-04-13 00:59:41.784671 | orchestrator | Sunday 13 April 2025 00:57:54 +0000 (0:00:01.127) 0:00:25.025 **********
2025-04-13 00:59:41.784681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-04-13 00:59:41.784691 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-04-13 00:59:41.784701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-04-13 00:59:41.784711 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-04-13 00:59:41.784721 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-04-13 00:59:41.784731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-04-13 00:59:41.784741 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.784751 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-04-13 00:59:41.784761 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.784771 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-04-13 00:59:41.784781 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-04-13 00:59:41.784791 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.784800 | orchestrator |
2025-04-13 00:59:41.784811 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-04-13 00:59:41.784820 | orchestrator | Sunday 13 April 2025 00:57:55 +0000 (0:00:00.875) 0:00:25.901 **********
2025-04-13 00:59:41.784831 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-04-13 00:59:41.784840 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-04-13 00:59:41.784850 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-04-13 00:59:41.784867 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-04-13 00:59:41.784877 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-04-13 00:59:41.784887 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-04-13 00:59:41.784897 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-04-13 00:59:41.785109 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-04-13 00:59:41.785143 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-04-13 00:59:41.785153 | orchestrator |
2025-04-13 00:59:41.785164 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-04-13 00:59:41.785174 | orchestrator | Sunday 13 April 2025 00:57:57 +0000 (0:00:02.130) 0:00:28.032 **********
2025-04-13 00:59:41.785184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-04-13 00:59:41.785194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-04-13 00:59:41.785204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-04-13 00:59:41.785214 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.785224 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-04-13 00:59:41.785234 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-04-13 00:59:41.785244 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-04-13 00:59:41.785254 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.785264 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-04-13 00:59:41.785274 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-04-13 00:59:41.785283 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-04-13 00:59:41.785293 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.785303 | orchestrator |
2025-04-13 00:59:41.785313 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-04-13 00:59:41.785323 | orchestrator | Sunday 13 April 2025 00:57:58 +0000 (0:00:00.632) 0:00:28.664 **********
2025-04-13 00:59:41.785333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-04-13 00:59:41.785343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-04-13 00:59:41.785353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-04-13 00:59:41.785363 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-04-13 00:59:41.785373 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-04-13 00:59:41.785383 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-04-13 00:59:41.785393 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.785403 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.785413 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-04-13 00:59:41.785423 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-04-13 00:59:41.785433 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-04-13 00:59:41.785442 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.785452 | orchestrator |
2025-04-13 00:59:41.785463 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-04-13 00:59:41.785472 | orchestrator | Sunday 13 April 2025 00:57:58 +0000 (0:00:00.412) 0:00:29.077 **********
2025-04-13 00:59:41.785482 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-04-13 00:59:41.785493 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-04-13 00:59:41.785503 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-04-13 00:59:41.785513 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.785523 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-04-13 00:59:41.785534 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-04-13 00:59:41.785544 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-04-13 00:59:41.785560 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.785570 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-04-13 00:59:41.785585 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-04-13 00:59:41.785596 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-04-13 00:59:41.785606 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.785616 | orchestrator |
2025-04-13 00:59:41.785627 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
2025-04-13 00:59:41.785637 | orchestrator | Sunday 13 April 2025 00:57:59 +0000 (0:00:00.409) 0:00:29.486 **********
2025-04-13 00:59:41.785647 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 00:59:41.785657 | orchestrator |
2025-04-13 00:59:41.785667 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-13 00:59:41.785678 | orchestrator | Sunday 13 April 2025 00:57:59 +0000 (0:00:00.771) 0:00:30.258 **********
2025-04-13 00:59:41.785688 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.785698 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.785708 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.785717 | orchestrator |
2025-04-13 00:59:41.785727 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-13 00:59:41.785737 | orchestrator | Sunday 13 April 2025 00:58:00 +0000 (0:00:00.425) 0:00:30.683 **********
2025-04-13 00:59:41.785747 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.785757 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.785767 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.785777 | orchestrator |
2025-04-13 00:59:41.785787 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-13 00:59:41.785797 | orchestrator | Sunday 13 April 2025 00:58:00 +0000 (0:00:00.338) 0:00:31.021 **********
2025-04-13 00:59:41.785807 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.785816 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.785831 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.785841 | orchestrator |
2025-04-13 00:59:41.785851 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-13 00:59:41.785861 | orchestrator | Sunday 13 April 2025 00:58:00 +0000 (0:00:00.316) 0:00:31.338 **********
2025-04-13 00:59:41.785871 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:59:41.785881 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:59:41.785890 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:59:41.785900 | orchestrator |
2025-04-13 00:59:41.785967 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-13 00:59:41.785978 | orchestrator | Sunday 13 April 2025 00:58:01 +0000 (0:00:00.735) 0:00:32.074 **********
2025-04-13 00:59:41.785988 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:59:41.785999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:59:41.786009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:59:41.786060 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.786072 | orchestrator |
2025-04-13 00:59:41.786082 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-13 00:59:41.786092 | orchestrator | Sunday 13 April 2025 00:58:02 +0000 (0:00:00.439) 0:00:32.513 **********
2025-04-13 00:59:41.786102 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:59:41.786112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:59:41.786180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:59:41.786191 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.786201 | orchestrator |
2025-04-13 00:59:41.786218 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-13 00:59:41.786228 | orchestrator | Sunday 13 April 2025 00:58:02 +0000 (0:00:00.400) 0:00:32.914 **********
2025-04-13 00:59:41.786238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:59:41.786249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:59:41.786259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:59:41.786269 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.786279 | orchestrator |
2025-04-13 00:59:41.786289 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-13 00:59:41.786299 | orchestrator | Sunday 13 April 2025 00:58:02 +0000 (0:00:00.391) 0:00:33.305 **********
2025-04-13 00:59:41.786309 | orchestrator | ok: [testbed-node-3]
2025-04-13 00:59:41.786318 | orchestrator | ok: [testbed-node-4]
2025-04-13 00:59:41.786326 | orchestrator | ok: [testbed-node-5]
2025-04-13 00:59:41.786335 | orchestrator |
2025-04-13 00:59:41.786343 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-13 00:59:41.786355 | orchestrator | Sunday 13 April 2025 00:58:03 +0000 (0:00:00.446) 0:00:33.752 **********
2025-04-13 00:59:41.786364 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-04-13 00:59:41.786372 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-04-13 00:59:41.786381 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-04-13 00:59:41.786389 | orchestrator |
2025-04-13 00:59:41.786398 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-04-13 00:59:41.786406 | orchestrator | Sunday 13 April 2025 00:58:04 +0000 (0:00:00.978) 0:00:34.730 **********
2025-04-13 00:59:41.786415 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.786423 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.786431 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.786440 | orchestrator |
2025-04-13 00:59:41.786448 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-13 00:59:41.786457 | orchestrator | Sunday 13 April 2025 00:58:04 +0000 (0:00:00.332) 0:00:35.062 **********
2025-04-13 00:59:41.786465 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.786474 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.786482 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.786491 | orchestrator |
2025-04-13 00:59:41.786499 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-04-13 00:59:41.786513 | orchestrator | Sunday 13 April 2025 00:58:04 +0000 (0:00:00.334) 0:00:35.397 **********
2025-04-13 00:59:41.786522 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-13 00:59:41.786531 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.786540 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-13 00:59:41.786548 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.786557 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-13 00:59:41.786565 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.786574 | orchestrator |
2025-04-13 00:59:41.786582 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-04-13 00:59:41.786591 | orchestrator | Sunday 13 April 2025 00:58:05 +0000 (0:00:00.664) 0:00:36.062 **********
2025-04-13 00:59:41.786600 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-04-13 00:59:41.786608 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.786617 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-04-13 00:59:41.786626 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.786634 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-04-13 00:59:41.786643 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.786651 | orchestrator |
2025-04-13 00:59:41.786660 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-04-13 00:59:41.786673 | orchestrator | Sunday 13 April 2025 00:58:06 +0000 (0:00:00.655) 0:00:36.717 **********
2025-04-13 00:59:41.786682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:59:41.786690 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-04-13 00:59:41.786699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-13 00:59:41.786708 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-04-13 00:59:41.786716 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-04-13 00:59:41.786725 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.786733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-13 00:59:41.786742 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.786750 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-04-13 00:59:41.786759 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-04-13 00:59:41.786767 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-04-13 00:59:41.786776 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.786784 | orchestrator |
2025-04-13 00:59:41.786793 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-04-13 00:59:41.786801 | orchestrator | Sunday 13 April 2025 00:58:07 +0000 (0:00:00.895) 0:00:37.613 **********
2025-04-13 00:59:41.786810 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.786818 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.786827 | orchestrator | skipping: [testbed-node-5]
2025-04-13 00:59:41.786836 | orchestrator |
2025-04-13 00:59:41.786844 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-04-13 00:59:41.786853 | orchestrator | Sunday 13 April 2025 00:58:07 +0000 (0:00:00.309) 0:00:37.923 **********
2025-04-13 00:59:41.786861 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-04-13 00:59:41.786870 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-13 00:59:41.786878 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-13 00:59:41.786887 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:59:41.786895 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-04-13 00:59:41.786904 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-04-13 00:59:41.786912 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-04-13 00:59:41.786920 | orchestrator |
2025-04-13 00:59:41.786929 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-04-13 00:59:41.786938 | orchestrator | Sunday 13 April 2025 00:58:08 +0000 (0:00:00.990) 0:00:38.914 **********
2025-04-13 00:59:41.786946 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-04-13 00:59:41.786955 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-13 00:59:41.786991 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-13 00:59:41.786999 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-04-13 00:59:41.787008 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-04-13 00:59:41.787017 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-04-13 00:59:41.787025 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-04-13 00:59:41.787034 | orchestrator |
2025-04-13 00:59:41.787042 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-04-13 00:59:41.787051 | orchestrator | Sunday 13 April 2025 00:58:10 +0000 (0:00:02.180) 0:00:41.094 **********
2025-04-13 00:59:41.787060 | orchestrator | skipping: [testbed-node-3]
2025-04-13 00:59:41.787073 | orchestrator | skipping: [testbed-node-4]
2025-04-13 00:59:41.787081 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-04-13 00:59:41.787090 | orchestrator |
2025-04-13 00:59:41.787098 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-04-13 00:59:41.787114 | orchestrator | Sunday 13 April 2025 00:58:11 +0000 (0:00:00.544) 0:00:41.639 **********
2025-04-13 00:59:41.787141 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-04-13 00:59:41.787164 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-04-13 00:59:41.787173 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-04-13 00:59:41.787182 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-04-13 00:59:41.787191 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-04-13 00:59:41.787199 | orchestrator |
2025-04-13 00:59:41.787208 | orchestrator | TASK [generate keys] ***********************************************************
2025-04-13 00:59:41.787217 | orchestrator | Sunday 13 April 2025 00:58:52 +0000 (0:00:41.607) 0:01:23.247 **********
2025-04-13 00:59:41.787225 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787234 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787242 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787251 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787259 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787268 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787276 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-04-13 00:59:41.787285 | orchestrator |
2025-04-13 00:59:41.787294 | orchestrator | TASK [get keys from monitors] **************************************************
2025-04-13 00:59:41.787302 | orchestrator | Sunday 13 April 2025 00:59:12 +0000 (0:00:19.965) 0:01:43.212 **********
2025-04-13 00:59:41.787311 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787319 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787328 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787336 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787345 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787354 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787362 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-04-13 00:59:41.787376 | orchestrator |
2025-04-13 00:59:41.787385 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-04-13 00:59:41.787393 | orchestrator | Sunday 13 April 2025 00:59:23 +0000 (0:00:10.426) 0:01:53.638 **********
2025-04-13 00:59:41.787402 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787410 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-13 00:59:41.787419 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-13 00:59:41.787427 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787436 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-13 00:59:41.787444 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-13 00:59:41.787453 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787461 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-13 00:59:41.787470 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-13 00:59:41.787478 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:41.787487 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-13 00:59:41.787500 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-13 00:59:44.826960 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:44.827104 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-13 00:59:44.827170 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-13 00:59:44.827187 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-13 00:59:44.827201 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-13 00:59:44.827215 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-13 00:59:44.827230 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-04-13 00:59:44.827245 | orchestrator |
2025-04-13 00:59:44.827260 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 00:59:44.827276 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-04-13 00:59:44.827292 | orchestrator | testbed-node-4 : ok=20  changed=0  unreachable=0 failed=0 skipped=30  rescued=0 ignored=0
2025-04-13 00:59:44.827307 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0
2025-04-13 00:59:44.827320 | orchestrator |
2025-04-13 00:59:44.827335 | orchestrator |
2025-04-13 00:59:44.827348 | orchestrator |
2025-04-13 00:59:44.827362 | orchestrator | TASKS RECAP ********************************************************************
2025-04-13 00:59:44.827376 | orchestrator | Sunday 13 April 2025 00:59:41 +0000 (0:00:17.967) 0:02:11.606 **********
2025-04-13 00:59:44.827390 | orchestrator | ===============================================================================
2025-04-13 00:59:44.827404 | orchestrator | create openstack pool(s) ----------------------------------------------- 41.61s
2025-04-13 00:59:44.827417 | orchestrator | generate keys ---------------------------------------------------------- 19.97s
2025-04-13 00:59:44.827431 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.97s
2025-04-13 00:59:44.827445 | orchestrator | get keys from monitors ------------------------------------------------- 10.43s
2025-04-13 00:59:44.827458 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.43s
2025-04-13 00:59:44.827472 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 2.18s
2025-04-13 00:59:44.827509 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 2.13s
2025-04-13 00:59:44.827525 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.53s
2025-04-13 00:59:44.827541 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.33s
2025-04-13 00:59:44.827556 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.13s
2025-04-13 00:59:44.827572 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.99s
2025-04-13 00:59:44.827587 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.98s
2025-04-13 00:59:44.827603 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.90s
2025-04-13 00:59:44.827619 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.88s
2025-04-13 00:59:44.827638 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.87s
2025-04-13 00:59:44.827662 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.86s
2025-04-13 00:59:44.827684 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.86s
2025-04-13 00:59:44.827708 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.77s
2025-04-13 00:59:44.827732 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.74s
2025-04-13 00:59:44.827756 | orchestrator | ceph-facts : set_fact _radosgw_address to radosgw_address --------------- 0.74s
2025-04-13 00:59:44.827803 | orchestrator | 2025-04-13 00:59:44 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED
2025-04-13 00:59:44.829348 | orchestrator | 2025-04-13 00:59:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:59:44.831278 | orchestrator | 2025-04-13 00:59:44 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED
2025-04-13 00:59:47.880856 | orchestrator | 2025-04-13 00:59:44 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:59:47.881001 | orchestrator | 2025-04-13 00:59:47 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED
2025-04-13 00:59:47.881566 | orchestrator | 2025-04-13 00:59:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:59:47.882716 | orchestrator | 2025-04-13 00:59:47 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED
2025-04-13 00:59:50.931245 | orchestrator | 2025-04-13 00:59:47 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:59:50.931383 | orchestrator | 2025-04-13 00:59:50 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED
2025-04-13 00:59:50.932183 | orchestrator | 2025-04-13 00:59:50 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:59:50.934197 | orchestrator | 2025-04-13 00:59:50 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED
2025-04-13 00:59:50.935356 | orchestrator | 2025-04-13 00:59:50 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:59:53.997380 | orchestrator | 2025-04-13 00:59:53 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED
2025-04-13 00:59:53.999516 | orchestrator | 2025-04-13 00:59:53 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:59:54.003661 | orchestrator | 2025-04-13 00:59:54 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED
2025-04-13 00:59:54.005863 | orchestrator | 2025-04-13 00:59:54 | INFO  | Task 50108906-0c81-4902-ab85-a58befe74758 is in state STARTED
2025-04-13 00:59:57.056487 | orchestrator | 2025-04-13 00:59:54 | INFO  | Wait 1 second(s) until the next check
2025-04-13 00:59:57.056648 | orchestrator | 2025-04-13 00:59:57 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED
2025-04-13 00:59:57.059266 | orchestrator | 2025-04-13 00:59:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 00:59:57.061422 | orchestrator
| 2025-04-13 00:59:57 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED 2025-04-13 00:59:57.063890 | orchestrator | 2025-04-13 00:59:57 | INFO  | Task 50108906-0c81-4902-ab85-a58befe74758 is in state STARTED 2025-04-13 00:59:57.064332 | orchestrator | 2025-04-13 00:59:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:00:00.108392 | orchestrator | 2025-04-13 01:00:00 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 01:00:00.109421 | orchestrator | 2025-04-13 01:00:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:00:00.113165 | orchestrator | 2025-04-13 01:00:00 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED 2025-04-13 01:00:00.116009 | orchestrator | 2025-04-13 01:00:00 | INFO  | Task 50108906-0c81-4902-ab85-a58befe74758 is in state STARTED 2025-04-13 01:00:00.116416 | orchestrator | 2025-04-13 01:00:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:00:03.166748 | orchestrator | 2025-04-13 01:00:03 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 01:00:03.167598 | orchestrator | 2025-04-13 01:00:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:00:03.168875 | orchestrator | 2025-04-13 01:00:03 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED 2025-04-13 01:00:03.170337 | orchestrator | 2025-04-13 01:00:03 | INFO  | Task 50108906-0c81-4902-ab85-a58befe74758 is in state STARTED 2025-04-13 01:00:06.209028 | orchestrator | 2025-04-13 01:00:03 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:00:06.209205 | orchestrator | 2025-04-13 01:00:06 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 01:00:06.209509 | orchestrator | 2025-04-13 01:00:06 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:00:06.210744 | orchestrator | 2025-04-13 01:00:06 | INFO  | 
Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED 2025-04-13 01:00:06.211674 | orchestrator | 2025-04-13 01:00:06 | INFO  | Task 50108906-0c81-4902-ab85-a58befe74758 is in state STARTED 2025-04-13 01:00:09.264321 | orchestrator | 2025-04-13 01:00:06 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:00:09.264465 | orchestrator | 2025-04-13 01:00:09 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 01:00:09.265336 | orchestrator | 2025-04-13 01:00:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:00:09.267102 | orchestrator | 2025-04-13 01:00:09 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED 2025-04-13 01:00:09.274409 | orchestrator | 2025-04-13 01:00:09 | INFO  | Task 50108906-0c81-4902-ab85-a58befe74758 is in state STARTED 2025-04-13 01:00:12.340097 | orchestrator | 2025-04-13 01:00:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:00:12.340339 | orchestrator | 2025-04-13 01:00:12 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state STARTED 2025-04-13 01:00:12.341027 | orchestrator | 2025-04-13 01:00:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:00:12.341914 | orchestrator | 2025-04-13 01:00:12 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED 2025-04-13 01:00:12.341950 | orchestrator | 2025-04-13 01:00:12 | INFO  | Task 50108906-0c81-4902-ab85-a58befe74758 is in state STARTED 2025-04-13 01:00:15.382303 | orchestrator | 2025-04-13 01:00:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:00:15.382399 | orchestrator | 2025-04-13 01:00:15 | INFO  | Task e2ab8e67-8f61-49ed-a02a-947182c2c4b9 is in state SUCCESS 2025-04-13 01:00:15.383187 | orchestrator | 2025-04-13 01:00:15.383202 | orchestrator | 2025-04-13 01:00:15.383208 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 01:00:15.383214 | 
orchestrator |
2025-04-13 01:00:15.383220 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-13 01:00:15.383226 | orchestrator | Sunday 13 April 2025 00:57:44 +0000 (0:00:00.317) 0:00:00.317 **********
2025-04-13 01:00:15.383232 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:15.383238 | orchestrator | ok: [testbed-node-1]
2025-04-13 01:00:15.383244 | orchestrator | ok: [testbed-node-2]
2025-04-13 01:00:15.383249 | orchestrator |
2025-04-13 01:00:15.383266 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-13 01:00:15.383272 | orchestrator | Sunday 13 April 2025 00:57:44 +0000 (0:00:00.416) 0:00:00.734 **********
2025-04-13 01:00:15.383276 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-04-13 01:00:15.383282 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-04-13 01:00:15.383286 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-04-13 01:00:15.383291 | orchestrator |
2025-04-13 01:00:15.383296 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-04-13 01:00:15.383301 | orchestrator |
2025-04-13 01:00:15.383308 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-04-13 01:00:15.383315 | orchestrator | Sunday 13 April 2025 00:57:45 +0000 (0:00:00.303) 0:00:01.037 **********
2025-04-13 01:00:15.383324 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 01:00:15.383333 | orchestrator |
2025-04-13 01:00:15.383341 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-04-13 01:00:15.383348 | orchestrator | Sunday 13 April 2025 00:57:46 +0000 (0:00:00.847) 0:00:01.885 **********
2025-04-13 01:00:15.383359 | orchestrator | changed: [testbed-node-0] => (item={'key':
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.383370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.383445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.383459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-13 01:00:15.383468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-13 01:00:15.383477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-13 01:00:15.383487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.383497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.383512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.383523 | orchestrator | 2025-04-13 01:00:15.383529 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-04-13 01:00:15.383538 | orchestrator | Sunday 13 April 2025 00:57:48 +0000 (0:00:02.364) 0:00:04.250 ********** 2025-04-13 01:00:15.383543 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-04-13 01:00:15.383548 | orchestrator | 2025-04-13 01:00:15.383553 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-04-13 01:00:15.383558 | orchestrator | Sunday 13 April 2025 00:57:48 +0000 (0:00:00.556) 0:00:04.806 ********** 2025-04-13 01:00:15.383563 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:00:15.383568 | orchestrator | ok: [testbed-node-1] 
2025-04-13 01:00:15.383573 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:00:15.383577 | orchestrator | 2025-04-13 01:00:15.383582 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-04-13 01:00:15.383587 | orchestrator | Sunday 13 April 2025 00:57:49 +0000 (0:00:00.438) 0:00:05.245 ********** 2025-04-13 01:00:15.383592 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-13 01:00:15.383598 | orchestrator | 2025-04-13 01:00:15.383603 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-13 01:00:15.383607 | orchestrator | Sunday 13 April 2025 00:57:49 +0000 (0:00:00.376) 0:00:05.622 ********** 2025-04-13 01:00:15.383620 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:00:15.383625 | orchestrator | 2025-04-13 01:00:15.383630 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-04-13 01:00:15.383635 | orchestrator | Sunday 13 April 2025 00:57:50 +0000 (0:00:00.670) 0:00:06.292 ********** 2025-04-13 01:00:15.383640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.383646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.383658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.383663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-13 01:00:15.383669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-13 01:00:15.383674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-13 01:00:15.383680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.383688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.383693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.383698 | orchestrator | 2025-04-13 01:00:15.383703 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-04-13 01:00:15.383708 | orchestrator | Sunday 13 April 2025 00:57:53 +0000 (0:00:03.248) 0:00:09.541 ********** 2025-04-13 01:00:15.383716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-13 01:00:15.383721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 01:00:15.383727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-13 01:00:15.383735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-13 01:00:15.383740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 01:00:15.383745 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:00:15.383766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-13 01:00:15.383771 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.383776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-13 01:00:15.383782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 01:00:15.383791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-13 01:00:15.383796 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:00:15.383800 | orchestrator | 2025-04-13 01:00:15.383805 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-04-13 01:00:15.383810 | orchestrator | Sunday 13 April 2025 00:57:54 +0000 (0:00:01.117) 0:00:10.658 ********** 2025-04-13 01:00:15.383815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-13 01:00:15.383823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 01:00:15.383829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-13 01:00:15.383834 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.383839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-13 
01:00:15.383847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 01:00:15.383852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-13 01:00:15.383857 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:00:15.383866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-13 01:00:15.384190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 01:00:15.384210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-13 01:00:15.384222 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:00:15.384228 | orchestrator | 2025-04-13 01:00:15.384233 | orchestrator | TASK [keystone : Copying over config.json files for 
services] ****************** 2025-04-13 01:00:15.384238 | orchestrator | Sunday 13 April 2025 00:57:56 +0000 (0:00:01.501) 0:00:12.160 ********** 2025-04-13 01:00:15.384244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.384251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.384263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.384272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2025-04-13 01:00:15.384278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-13 01:00:15.384284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-13 01:00:15.384289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 
01:00:15.384295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.384303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.384309 | orchestrator | 2025-04-13 01:00:15.384314 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-04-13 01:00:15.384336 | orchestrator | Sunday 13 April 2025 00:57:59 +0000 (0:00:03.522) 0:00:15.683 ********** 2025-04-13 01:00:15.384374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.384381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 01:00:15.384386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.384391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 01:00:15.384400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.384409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 01:00:15.384414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.384421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.384428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.384436 | orchestrator | 2025-04-13 01:00:15.384444 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-04-13 01:00:15.384451 | orchestrator | Sunday 13 April 2025 00:58:08 +0000 (0:00:08.574) 0:00:24.257 ********** 2025-04-13 01:00:15.384459 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:00:15.384466 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:00:15.384473 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:00:15.384481 | orchestrator | 2025-04-13 01:00:15.384504 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-04-13 01:00:15.384514 | orchestrator | Sunday 13 April 2025 00:58:11 +0000 (0:00:02.654) 0:00:26.912 ********** 2025-04-13 01:00:15.384522 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.384531 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:00:15.384540 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:00:15.384555 | orchestrator | 2025-04-13 01:00:15.384568 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-04-13 01:00:15.384582 | orchestrator | Sunday 13 
April 2025 00:58:12 +0000 (0:00:01.153) 0:00:28.066 ********** 2025-04-13 01:00:15.384591 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.384600 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:00:15.384610 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:00:15.384619 | orchestrator | 2025-04-13 01:00:15.384630 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-04-13 01:00:15.384639 | orchestrator | Sunday 13 April 2025 00:58:12 +0000 (0:00:00.538) 0:00:28.604 ********** 2025-04-13 01:00:15.384726 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.384734 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:00:15.384742 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:00:15.384750 | orchestrator | 2025-04-13 01:00:15.384757 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-04-13 01:00:15.384765 | orchestrator | Sunday 13 April 2025 00:58:13 +0000 (0:00:00.431) 0:00:29.035 ********** 2025-04-13 01:00:15.384774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.384784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 01:00:15.384792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.384802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 01:00:15.384819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.384825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-13 01:00:15.384830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.384835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.384840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.384848 | orchestrator | 2025-04-13 01:00:15.384852 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-13 01:00:15.384857 | orchestrator | Sunday 13 April 2025 00:58:15 +0000 (0:00:02.707) 0:00:31.743 ********** 2025-04-13 01:00:15.384862 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.384867 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:00:15.384872 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:00:15.384877 | orchestrator | 2025-04-13 01:00:15.384882 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-04-13 01:00:15.384887 | orchestrator | Sunday 13 April 2025 00:58:16 +0000 (0:00:00.308) 0:00:32.051 ********** 2025-04-13 01:00:15.384891 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-13 01:00:15.384897 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-13 01:00:15.384904 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-13 01:00:15.384909 | orchestrator | 2025-04-13 01:00:15.384914 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-04-13 01:00:15.384919 | orchestrator | Sunday 13 April 2025 00:58:18 +0000 (0:00:02.037) 0:00:34.088 ********** 2025-04-13 01:00:15.384924 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-13 01:00:15.384929 | orchestrator | 2025-04-13 01:00:15.384933 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-04-13 
01:00:15.384938 | orchestrator | Sunday 13 April 2025 00:58:18 +0000 (0:00:00.576) 0:00:34.664 ********** 2025-04-13 01:00:15.384943 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.384948 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:00:15.384953 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:00:15.384957 | orchestrator | 2025-04-13 01:00:15.384962 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-04-13 01:00:15.384967 | orchestrator | Sunday 13 April 2025 00:58:20 +0000 (0:00:01.252) 0:00:35.916 ********** 2025-04-13 01:00:15.384972 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-13 01:00:15.384976 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-13 01:00:15.384981 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-13 01:00:15.384986 | orchestrator | 2025-04-13 01:00:15.384991 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-04-13 01:00:15.384995 | orchestrator | Sunday 13 April 2025 00:58:21 +0000 (0:00:01.090) 0:00:37.007 ********** 2025-04-13 01:00:15.385000 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:00:15.385005 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:00:15.385010 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:00:15.385015 | orchestrator | 2025-04-13 01:00:15.385020 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-04-13 01:00:15.385024 | orchestrator | Sunday 13 April 2025 00:58:21 +0000 (0:00:00.379) 0:00:37.386 ********** 2025-04-13 01:00:15.385029 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-13 01:00:15.385034 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-13 01:00:15.385045 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-13 
01:00:15.385050 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-13 01:00:15.385055 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-13 01:00:15.385059 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-13 01:00:15.385064 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-13 01:00:15.385069 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-13 01:00:15.385074 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-13 01:00:15.385082 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-13 01:00:15.385086 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-13 01:00:15.385091 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-13 01:00:15.385096 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-13 01:00:15.385101 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-13 01:00:15.385105 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-13 01:00:15.385110 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-13 01:00:15.385130 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-13 01:00:15.385135 | orchestrator | changed: [testbed-node-2] => 
(item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-13 01:00:15.385139 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-13 01:00:15.385144 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-13 01:00:15.385149 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-13 01:00:15.385154 | orchestrator | 2025-04-13 01:00:15.385159 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-04-13 01:00:15.385163 | orchestrator | Sunday 13 April 2025 00:58:32 +0000 (0:00:11.052) 0:00:48.439 ********** 2025-04-13 01:00:15.385168 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-13 01:00:15.385173 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-13 01:00:15.385177 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-13 01:00:15.385182 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-13 01:00:15.385187 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-13 01:00:15.385194 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-13 01:00:15.385199 | orchestrator | 2025-04-13 01:00:15.385206 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-04-13 01:00:15.385211 | orchestrator | Sunday 13 April 2025 00:58:35 +0000 (0:00:03.057) 0:00:51.496 ********** 2025-04-13 01:00:15.385216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.385258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.385269 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-13 01:00:15.385275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-13 01:00:15.385301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-13 01:00:15.385307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-13 01:00:15.385312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.385320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.385326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-13 01:00:15.385331 | orchestrator | 2025-04-13 01:00:15.385336 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-13 01:00:15.385340 | orchestrator | Sunday 13 April 2025 00:58:38 +0000 (0:00:02.715) 0:00:54.212 ********** 2025-04-13 01:00:15.385345 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.385350 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:00:15.385355 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:00:15.385359 | orchestrator | 2025-04-13 01:00:15.385364 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-04-13 01:00:15.385369 | orchestrator | Sunday 13 April 2025 00:58:38 +0000 (0:00:00.307) 0:00:54.519 ********** 2025-04-13 01:00:15.385374 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:00:15.385379 | orchestrator | 2025-04-13 01:00:15.385383 | orchestrator | TASK 
[keystone : Creating Keystone database user and setting permissions] ****** 2025-04-13 01:00:15.385388 | orchestrator | Sunday 13 April 2025 00:58:41 +0000 (0:00:02.474) 0:00:56.994 ********** 2025-04-13 01:00:15.385393 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:00:15.385398 | orchestrator | 2025-04-13 01:00:15.385402 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-04-13 01:00:15.385407 | orchestrator | Sunday 13 April 2025 00:58:43 +0000 (0:00:02.221) 0:00:59.215 ********** 2025-04-13 01:00:15.385412 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:00:15.385417 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:00:15.385421 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:00:15.385426 | orchestrator | 2025-04-13 01:00:15.385431 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-04-13 01:00:15.385436 | orchestrator | Sunday 13 April 2025 00:58:44 +0000 (0:00:00.913) 0:01:00.129 ********** 2025-04-13 01:00:15.385440 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:00:15.385448 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:00:15.385453 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:00:15.385457 | orchestrator | 2025-04-13 01:00:15.385462 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-04-13 01:00:15.385467 | orchestrator | Sunday 13 April 2025 00:58:44 +0000 (0:00:00.367) 0:01:00.496 ********** 2025-04-13 01:00:15.385472 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.385477 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:00:15.385487 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:00:15.385492 | orchestrator | 2025-04-13 01:00:15.385497 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-04-13 01:00:15.385501 | orchestrator | Sunday 13 April 2025 00:58:45 +0000 (0:00:00.542) 
0:01:01.039 ********** 2025-04-13 01:00:15.385506 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:00:15.385511 | orchestrator | 2025-04-13 01:00:15.385516 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-04-13 01:00:15.385520 | orchestrator | Sunday 13 April 2025 00:58:58 +0000 (0:00:13.195) 0:01:14.234 ********** 2025-04-13 01:00:15.385525 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:00:15.385530 | orchestrator | 2025-04-13 01:00:15.385535 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-13 01:00:15.385539 | orchestrator | Sunday 13 April 2025 00:59:07 +0000 (0:00:09.039) 0:01:23.273 ********** 2025-04-13 01:00:15.385544 | orchestrator | 2025-04-13 01:00:15.385549 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-13 01:00:15.385554 | orchestrator | Sunday 13 April 2025 00:59:07 +0000 (0:00:00.056) 0:01:23.330 ********** 2025-04-13 01:00:15.385558 | orchestrator | 2025-04-13 01:00:15.385563 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-13 01:00:15.385568 | orchestrator | Sunday 13 April 2025 00:59:07 +0000 (0:00:00.054) 0:01:23.384 ********** 2025-04-13 01:00:15.385573 | orchestrator | 2025-04-13 01:00:15.385577 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-04-13 01:00:15.385582 | orchestrator | Sunday 13 April 2025 00:59:07 +0000 (0:00:00.059) 0:01:23.444 ********** 2025-04-13 01:00:15.385587 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:00:15.385591 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:00:15.385596 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:00:15.385601 | orchestrator | 2025-04-13 01:00:15.385606 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-04-13 01:00:15.385610 
| orchestrator | Sunday 13 April 2025 00:59:16 +0000 (0:00:08.883) 0:01:32.327 ********** 2025-04-13 01:00:15.385615 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:00:15.385620 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:00:15.385625 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:00:15.385629 | orchestrator | 2025-04-13 01:00:15.385634 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-04-13 01:00:15.385639 | orchestrator | Sunday 13 April 2025 00:59:25 +0000 (0:00:09.470) 0:01:41.797 ********** 2025-04-13 01:00:15.385643 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:00:15.385648 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:00:15.385653 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:00:15.385658 | orchestrator | 2025-04-13 01:00:15.385662 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-13 01:00:15.385667 | orchestrator | Sunday 13 April 2025 00:59:31 +0000 (0:00:05.264) 0:01:47.061 ********** 2025-04-13 01:00:15.385672 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:00:15.385677 | orchestrator | 2025-04-13 01:00:15.385684 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-04-13 01:00:15.385689 | orchestrator | Sunday 13 April 2025 00:59:32 +0000 (0:00:00.845) 0:01:47.907 ********** 2025-04-13 01:00:15.385693 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:00:15.385698 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:00:15.385703 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:00:15.385708 | orchestrator | 2025-04-13 01:00:15.385712 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-04-13 01:00:15.385717 | orchestrator | Sunday 13 April 2025 00:59:33 +0000 (0:00:01.023) 0:01:48.931 
********** 2025-04-13 01:00:15.385722 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:00:15.385727 | orchestrator | 2025-04-13 01:00:15.385731 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-04-13 01:00:15.385745 | orchestrator | Sunday 13 April 2025 00:59:34 +0000 (0:00:01.533) 0:01:50.464 ********** 2025-04-13 01:00:15.385750 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-04-13 01:00:15.385755 | orchestrator | 2025-04-13 01:00:15.385760 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-04-13 01:00:15.385764 | orchestrator | Sunday 13 April 2025 00:59:43 +0000 (0:00:08.585) 0:01:59.050 ********** 2025-04-13 01:00:15.385769 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-04-13 01:00:15.385774 | orchestrator | 2025-04-13 01:00:15.385779 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-04-13 01:00:15.385784 | orchestrator | Sunday 13 April 2025 01:00:01 +0000 (0:00:18.525) 0:02:17.576 ********** 2025-04-13 01:00:15.385789 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-04-13 01:00:15.385793 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-04-13 01:00:15.385798 | orchestrator | 2025-04-13 01:00:15.385803 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-04-13 01:00:15.385807 | orchestrator | Sunday 13 April 2025 01:00:07 +0000 (0:00:06.214) 0:02:23.790 ********** 2025-04-13 01:00:15.385812 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.385817 | orchestrator | 2025-04-13 01:00:15.385822 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-04-13 01:00:15.385826 | orchestrator | Sunday 13 April 2025 01:00:08 
+0000 (0:00:00.121) 0:02:23.911 ********** 2025-04-13 01:00:15.385831 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.385838 | orchestrator | 2025-04-13 01:00:15.385881 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-04-13 01:00:15.385891 | orchestrator | Sunday 13 April 2025 01:00:08 +0000 (0:00:00.119) 0:02:24.031 ********** 2025-04-13 01:00:15.386531 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.386540 | orchestrator | 2025-04-13 01:00:15.386545 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-04-13 01:00:15.386550 | orchestrator | Sunday 13 April 2025 01:00:08 +0000 (0:00:00.119) 0:02:24.150 ********** 2025-04-13 01:00:15.386555 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.386560 | orchestrator | 2025-04-13 01:00:15.386565 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-04-13 01:00:15.386570 | orchestrator | Sunday 13 April 2025 01:00:08 +0000 (0:00:00.424) 0:02:24.574 ********** 2025-04-13 01:00:15.386574 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:00:15.386579 | orchestrator | 2025-04-13 01:00:15.386584 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-13 01:00:15.386589 | orchestrator | Sunday 13 April 2025 01:00:12 +0000 (0:00:03.539) 0:02:28.114 ********** 2025-04-13 01:00:15.386594 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:00:15.386598 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:00:15.386603 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:00:15.386608 | orchestrator | 2025-04-13 01:00:15.386613 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:00:15.386618 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-13 
01:00:15.386624 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-13 01:00:15.386629 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-13 01:00:15.386634 | orchestrator | 2025-04-13 01:00:15.386639 | orchestrator | 2025-04-13 01:00:15.386644 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:00:15.386649 | orchestrator | Sunday 13 April 2025 01:00:12 +0000 (0:00:00.599) 0:02:28.714 ********** 2025-04-13 01:00:15.386658 | orchestrator | =============================================================================== 2025-04-13 01:00:15.386663 | orchestrator | service-ks-register : keystone | Creating services --------------------- 18.53s 2025-04-13 01:00:15.386667 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.20s 2025-04-13 01:00:15.386672 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 11.05s 2025-04-13 01:00:15.386677 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.47s 2025-04-13 01:00:15.386682 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.04s 2025-04-13 01:00:15.386687 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 8.88s 2025-04-13 01:00:15.386694 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 8.59s 2025-04-13 01:00:15.386700 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 8.57s 2025-04-13 01:00:15.386704 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.21s 2025-04-13 01:00:15.386709 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.26s 2025-04-13 01:00:15.386714 | orchestrator | 
keystone : Creating default user role ----------------------------------- 3.54s 2025-04-13 01:00:15.386719 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.52s 2025-04-13 01:00:15.386724 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.25s 2025-04-13 01:00:15.386729 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.06s 2025-04-13 01:00:15.386734 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.72s 2025-04-13 01:00:15.386738 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.71s 2025-04-13 01:00:15.386773 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.65s 2025-04-13 01:00:15.386778 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.47s 2025-04-13 01:00:15.386783 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.36s 2025-04-13 01:00:15.386788 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.22s 2025-04-13 01:00:15.386793 | orchestrator | 2025-04-13 01:00:15 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:00:15.386798 | orchestrator | 2025-04-13 01:00:15 | INFO  | Task 85038687-faf6-4abd-98b8-def7ef964de6 is in state STARTED 2025-04-13 01:00:15.386803 | orchestrator | 2025-04-13 01:00:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:00:15.386808 | orchestrator | 2025-04-13 01:00:15 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED 2025-04-13 01:00:15.386813 | orchestrator | 2025-04-13 01:00:15 | INFO  | Task 50108906-0c81-4902-ab85-a58befe74758 is in state STARTED 2025-04-13 01:00:15.386818 | orchestrator | 2025-04-13 01:00:15 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in 
state STARTED 2025-04-13 01:00:15.386825 | orchestrator | 2025-04-13 01:00:15 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:00:18.426972 | orchestrator | 2025-04-13 01:00:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:00:18.427342 | orchestrator | 2025-04-13 01:00:18 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:00:18.427888 | orchestrator | 2025-04-13 01:00:18 | INFO  | Task 85038687-faf6-4abd-98b8-def7ef964de6 is in state STARTED 2025-04-13 01:00:18.427930 | orchestrator | 2025-04-13 01:00:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:00:18.428199 | orchestrator | 2025-04-13 01:00:18 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED 2025-04-13 01:00:18.428855 | orchestrator | 2025-04-13 01:00:18 | INFO  | Task 50108906-0c81-4902-ab85-a58befe74758 is in state STARTED 2025-04-13 01:00:18.429526 | orchestrator | 2025-04-13 01:00:18 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:00:18.430220 | orchestrator | 2025-04-13 01:00:18 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:00:21.472306 | orchestrator | 2025-04-13 01:00:18 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:00:21.472442 | orchestrator | 2025-04-13 01:00:21 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:00:21.473001 | orchestrator | 2025-04-13 01:00:21 | INFO  | Task 85038687-faf6-4abd-98b8-def7ef964de6 is in state STARTED 2025-04-13 01:00:21.474577 | orchestrator | 2025-04-13 01:00:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:00:21.475329 | orchestrator | 2025-04-13 01:00:21 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state STARTED 2025-04-13 01:00:21.477613 | orchestrator | 2025-04-13 01:00:21.477706 | orchestrator | [WARNING]: Collection osism.commons does 
not support Ansible version 2.15.12 2025-04-13 01:00:21.477723 | orchestrator | 2025-04-13 01:00:21.477737 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-04-13 01:00:21.477750 | orchestrator | 2025-04-13 01:00:21.477763 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-13 01:00:21.477776 | orchestrator | Sunday 13 April 2025 00:59:53 +0000 (0:00:00.459) 0:00:00.459 ********** 2025-04-13 01:00:21.477789 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-04-13 01:00:21.477802 | orchestrator | 2025-04-13 01:00:21.477815 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-13 01:00:21.477827 | orchestrator | Sunday 13 April 2025 00:59:53 +0000 (0:00:00.210) 0:00:00.670 ********** 2025-04-13 01:00:21.477840 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-13 01:00:21.477853 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-04-13 01:00:21.477865 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-04-13 01:00:21.477877 | orchestrator | 2025-04-13 01:00:21.477890 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-13 01:00:21.477902 | orchestrator | Sunday 13 April 2025 00:59:54 +0000 (0:00:00.880) 0:00:01.551 ********** 2025-04-13 01:00:21.477914 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-04-13 01:00:21.477927 | orchestrator | 2025-04-13 01:00:21.477939 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-13 01:00:21.477951 | orchestrator | Sunday 13 April 2025 00:59:54 +0000 (0:00:00.215) 0:00:01.766 ********** 2025-04-13 01:00:21.477963 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:00:21.477976 | orchestrator 
| 2025-04-13 01:00:21.477989 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-04-13 01:00:21.478001 | orchestrator | Sunday 13 April 2025 00:59:55 +0000 (0:00:00.600) 0:00:02.367 **********
2025-04-13 01:00:21.478013 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.478073 | orchestrator |
2025-04-13 01:00:21.478086 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-04-13 01:00:21.478099 | orchestrator | Sunday 13 April 2025 00:59:55 +0000 (0:00:00.138) 0:00:02.505 **********
2025-04-13 01:00:21.478135 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.478149 | orchestrator |
2025-04-13 01:00:21.478164 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-04-13 01:00:21.478179 | orchestrator | Sunday 13 April 2025 00:59:56 +0000 (0:00:00.471) 0:00:02.977 **********
2025-04-13 01:00:21.478193 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.478209 | orchestrator |
2025-04-13 01:00:21.478237 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-04-13 01:00:21.478273 | orchestrator | Sunday 13 April 2025 00:59:56 +0000 (0:00:00.150) 0:00:03.128 **********
2025-04-13 01:00:21.478288 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.478302 | orchestrator |
2025-04-13 01:00:21.478316 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-04-13 01:00:21.478330 | orchestrator | Sunday 13 April 2025 00:59:56 +0000 (0:00:00.144) 0:00:03.273 **********
2025-04-13 01:00:21.478344 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.478358 | orchestrator |
2025-04-13 01:00:21.478372 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-04-13 01:00:21.478386 | orchestrator | Sunday 13 April 2025 00:59:56 +0000 (0:00:00.140) 0:00:03.414 **********
2025-04-13 01:00:21.478400 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.478416 | orchestrator |
2025-04-13 01:00:21.478430 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-04-13 01:00:21.478445 | orchestrator | Sunday 13 April 2025 00:59:56 +0000 (0:00:00.147) 0:00:03.561 **********
2025-04-13 01:00:21.478458 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.478471 | orchestrator |
2025-04-13 01:00:21.478483 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-04-13 01:00:21.478495 | orchestrator | Sunday 13 April 2025 00:59:56 +0000 (0:00:00.295) 0:00:03.857 **********
2025-04-13 01:00:21.478508 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 01:00:21.478521 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-13 01:00:21.478533 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-13 01:00:21.478545 | orchestrator |
2025-04-13 01:00:21.478558 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-04-13 01:00:21.478570 | orchestrator | Sunday 13 April 2025 00:59:57 +0000 (0:00:00.714) 0:00:04.571 **********
2025-04-13 01:00:21.478582 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.478595 | orchestrator |
2025-04-13 01:00:21.478607 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-04-13 01:00:21.478620 | orchestrator | Sunday 13 April 2025 00:59:57 +0000 (0:00:00.248) 0:00:04.819 **********
2025-04-13 01:00:21.478632 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 01:00:21.478645 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-13 01:00:21.478657 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-13 01:00:21.478670 | orchestrator |
2025-04-13 01:00:21.478682 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
2025-04-13 01:00:21.478694 | orchestrator | Sunday 13 April 2025 00:59:59 +0000 (0:00:01.978) 0:00:06.798 **********
2025-04-13 01:00:21.478706 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2025-04-13 01:00:21.478718 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2025-04-13 01:00:21.478731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2025-04-13 01:00:21.478743 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.478756 | orchestrator |
2025-04-13 01:00:21.478769 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] *********************
2025-04-13 01:00:21.478795 | orchestrator | Sunday 13 April 2025 01:00:00 +0000 (0:00:00.418) 0:00:07.217 **********
2025-04-13 01:00:21.478813 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 
2025-04-13 01:00:21.478829 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2025-04-13 01:00:21.478841 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 
2025-04-13 01:00:21.478860 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.478873 | orchestrator |
2025-04-13 01:00:21.478886 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] ***********************
2025-04-13 01:00:21.478898 | orchestrator | Sunday 13 April 2025 01:00:01 +0000 (0:00:00.787) 0:00:08.004 **********
2025-04-13 01:00:21.478912 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2025-04-13 01:00:21.478926 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2025-04-13 01:00:21.478939 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2025-04-13 01:00:21.478952 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.478965 | orchestrator |
2025-04-13 01:00:21.478977 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] ***************************
2025-04-13 01:00:21.478989 | orchestrator | Sunday 13 April 2025 01:00:01 +0000 (0:00:00.175) 0:00:08.180 **********
2025-04-13 01:00:21.479204 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '181935c7d3e1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-13 00:59:58.601431', 'end': '2025-04-13 00:59:58.647910', 'delta': '0:00:00.046479', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['181935c7d3e1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-04-13 01:00:21.479268 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '179a905db4fc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-13 00:59:59.166653', 'end': '2025-04-13 00:59:59.203521', 'delta': '0:00:00.036868', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['179a905db4fc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-04-13 01:00:21.479293 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '6fda53730048', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-13 00:59:59.757309', 'end': '2025-04-13 00:59:59.785114', 'delta': '0:00:00.027805', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6fda53730048'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-04-13 01:00:21.479317 | orchestrator |
2025-04-13 01:00:21.479330 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] *******************************
2025-04-13 01:00:21.479343 | orchestrator | Sunday 13 April 2025 01:00:01 +0000 (0:00:00.202) 0:00:08.383 **********
2025-04-13 01:00:21.479355 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.479369 | orchestrator |
2025-04-13 01:00:21.479381 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] *************
2025-04-13 01:00:21.479393 | orchestrator | Sunday 13 April 2025 01:00:01 +0000 (0:00:00.261) 0:00:08.644 **********
2025-04-13 01:00:21.479406 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2025-04-13 01:00:21.479418 | orchestrator |
2025-04-13 01:00:21.479431 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] *********************************
2025-04-13 01:00:21.479444 | orchestrator | Sunday 13 April 2025 01:00:03 +0000 (0:00:01.473) 0:00:10.117 **********
2025-04-13 01:00:21.479456 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.479468 | orchestrator |
2025-04-13 01:00:21.479487 | orchestrator | TASK [ceph-facts : get current fsid] *******************************************
2025-04-13 01:00:21.479500 | orchestrator | Sunday 13 April 2025 01:00:03 +0000 (0:00:00.132) 0:00:10.250 **********
2025-04-13 01:00:21.479512 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.479524 | orchestrator |
2025-04-13 01:00:21.479537 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-04-13 01:00:21.479549 | orchestrator | Sunday 13 April 2025 01:00:03 +0000 (0:00:00.232) 0:00:10.482 **********
2025-04-13 01:00:21.479561 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.479574 | orchestrator |
2025-04-13 01:00:21.479586 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] ****************************
2025-04-13 01:00:21.479598 | orchestrator | Sunday 13 April 2025 01:00:03 +0000 (0:00:00.117) 0:00:10.599 **********
2025-04-13 01:00:21.479610 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.479623 | orchestrator |
2025-04-13 01:00:21.479635 | orchestrator | TASK [ceph-facts : generate cluster fsid] **************************************
2025-04-13 01:00:21.479648 | orchestrator | Sunday 13 April 2025 01:00:03 +0000 (0:00:00.129) 0:00:10.729 **********
2025-04-13 01:00:21.479660 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.479672 | orchestrator |
2025-04-13 01:00:21.479684 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-04-13 01:00:21.479697 | orchestrator | Sunday 13 April 2025 01:00:04 +0000 (0:00:00.225) 0:00:10.955 **********
2025-04-13 01:00:21.479709 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.479722 | orchestrator |
2025-04-13 01:00:21.479734 | orchestrator | TASK [ceph-facts : resolve device link(s)] *************************************
2025-04-13 01:00:21.479746 | orchestrator | Sunday 13 April 2025 01:00:04 +0000 (0:00:00.124) 0:00:11.079 **********
2025-04-13 01:00:21.479759 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.479771 | orchestrator |
2025-04-13 01:00:21.479783 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] **************
2025-04-13 01:00:21.479796 | orchestrator | Sunday 13 April 2025 01:00:04 +0000 (0:00:00.139) 0:00:11.219 **********
2025-04-13 01:00:21.479808 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.479821 | orchestrator |
2025-04-13 01:00:21.479836 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] ***************************
2025-04-13 01:00:21.479850 | orchestrator | Sunday 13 April 2025 01:00:04 +0000 (0:00:00.124) 0:00:11.344 **********
2025-04-13 01:00:21.479864 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.479877 | orchestrator |
2025-04-13 01:00:21.479891 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] ****
2025-04-13 01:00:21.479905 | orchestrator | Sunday 13 April 2025 01:00:04 +0000 (0:00:00.131) 0:00:11.475 **********
2025-04-13 01:00:21.479920 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.479939 | orchestrator |
2025-04-13 01:00:21.479954 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] ***********************
2025-04-13 01:00:21.479968 | orchestrator | Sunday 13 April 2025 01:00:04 +0000 (0:00:00.333) 0:00:11.808 **********
2025-04-13 01:00:21.479982 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.479996 | orchestrator |
2025-04-13 01:00:21.480010 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-04-13 01:00:21.480024 | orchestrator | Sunday 13 April 2025 01:00:05 +0000 (0:00:00.130) 0:00:11.939 **********
2025-04-13 01:00:21.480038 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.480052 | orchestrator |
2025-04-13 01:00:21.480065 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] ***
2025-04-13 01:00:21.480079 | orchestrator | Sunday 13 April 2025 01:00:05 +0000 (0:00:00.132) 0:00:12.071 **********
2025-04-13 01:00:21.480093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2025-04-13 01:00:21.480132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2025-04-13 01:00:21.480148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2025-04-13 01:00:21.480163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2025-04-13 01:00:21.480178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2025-04-13 01:00:21.480197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2025-04-13 01:00:21.480211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2025-04-13 01:00:21.480231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2025-04-13 01:00:21.480255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1', 'scsi-SQEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part1', 'scsi-SQEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part14', 'scsi-SQEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part15', 'scsi-SQEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part16', 'scsi-SQEMU_QEMU_HARDDISK_fe299df1-123f-45eb-a46f-1bc77e9ea0d1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}) 
2025-04-13 01:00:21.480272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5c76205-09bb-4a16-ab8f-39ffb03c9143', 'scsi-SQEMU_QEMU_HARDDISK_f5c76205-09bb-4a16-ab8f-39ffb03c9143'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}) 
2025-04-13 01:00:21.480287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95b24700-cfbe-4d9d-a7ca-ca6e4d2b6d43', 'scsi-SQEMU_QEMU_HARDDISK_95b24700-cfbe-4d9d-a7ca-ca6e4d2b6d43'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}) 
2025-04-13 01:00:21.480300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b430468-eb80-4fc4-b9b2-ed2873d86014', 'scsi-SQEMU_QEMU_HARDDISK_9b430468-eb80-4fc4-b9b2-ed2873d86014'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}) 
2025-04-13 01:00:21.480321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-13-00-02-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
2025-04-13 01:00:21.480336 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.480348 | orchestrator |
2025-04-13 01:00:21.480361 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************
2025-04-13 01:00:21.480373 | orchestrator | Sunday 13 April 2025 01:00:05 +0000 (0:00:00.280) 0:00:12.352 **********
2025-04-13 01:00:21.480386 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.480398 | orchestrator |
2025-04-13 01:00:21.480411 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] *******************************
2025-04-13 01:00:21.480423 | orchestrator | Sunday 13 April 2025 01:00:05 +0000 (0:00:00.252) 0:00:12.605 **********
2025-04-13 01:00:21.480435 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.480448 | orchestrator |
2025-04-13 01:00:21.480460 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] **************************************
2025-04-13 01:00:21.480472 | orchestrator | Sunday 13 April 2025 01:00:05 +0000 (0:00:00.130) 0:00:12.735 **********
2025-04-13 01:00:21.480484 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.480496 | orchestrator |
2025-04-13 01:00:21.480509 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ******************************
2025-04-13 01:00:21.480521 | orchestrator | Sunday 13 April 2025 01:00:05 +0000 (0:00:00.122) 0:00:12.858 **********
2025-04-13 01:00:21.480543 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.480556 | orchestrator |
2025-04-13 01:00:21.480569 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] ***************
2025-04-13 01:00:21.480581 | orchestrator | Sunday 13 April 2025 01:00:06 +0000 (0:00:00.526) 0:00:13.385 **********
2025-04-13 01:00:21.480593 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.480605 | orchestrator |
2025-04-13 01:00:21.480618 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-04-13 01:00:21.480630 | orchestrator | Sunday 13 April 2025 01:00:06 +0000 (0:00:00.126) 0:00:13.512 **********
2025-04-13 01:00:21.480642 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.480655 | orchestrator |
2025-04-13 01:00:21.480667 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-04-13 01:00:21.480680 | orchestrator | Sunday 13 April 2025 01:00:07 +0000 (0:00:00.474) 0:00:13.986 **********
2025-04-13 01:00:21.480692 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.480704 | orchestrator |
2025-04-13 01:00:21.480717 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-04-13 01:00:21.480729 | orchestrator | Sunday 13 April 2025 01:00:07 +0000 (0:00:00.355) 0:00:14.342 **********
2025-04-13 01:00:21.480741 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.480754 | orchestrator |
2025-04-13 01:00:21.480766 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-04-13 01:00:21.480779 | orchestrator | Sunday 13 April 2025 01:00:07 +0000 (0:00:00.243) 0:00:14.585 **********
2025-04-13 01:00:21.480791 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.480804 | orchestrator |
2025-04-13 01:00:21.480816 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-04-13 01:00:21.480828 | orchestrator | Sunday 13 April 2025 01:00:07 +0000 (0:00:00.181) 0:00:14.767 **********
2025-04-13 01:00:21.480847 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2025-04-13 01:00:21.480860 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2025-04-13 01:00:21.480873 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2025-04-13 01:00:21.480885 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.480897 | orchestrator |
2025-04-13 01:00:21.480910 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-04-13 01:00:21.480922 | orchestrator | Sunday 13 April 2025 01:00:08 +0000 (0:00:00.447) 0:00:15.215 **********
2025-04-13 01:00:21.480935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2025-04-13 01:00:21.480948 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2025-04-13 01:00:21.480960 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2025-04-13 01:00:21.480973 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.480986 | orchestrator |
2025-04-13 01:00:21.480998 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-04-13 01:00:21.481011 | orchestrator | Sunday 13 April 2025 01:00:08 +0000 (0:00:00.475) 0:00:15.691 **********
2025-04-13 01:00:21.481024 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 01:00:21.481037 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-04-13 01:00:21.481049 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-04-13 01:00:21.481061 | orchestrator |
2025-04-13 01:00:21.481074 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-04-13 01:00:21.481086 | orchestrator | Sunday 13 April 2025 01:00:09 +0000 (0:00:01.184) 0:00:16.875 **********
2025-04-13 01:00:21.481098 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2025-04-13 01:00:21.481111 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2025-04-13 01:00:21.481150 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2025-04-13 01:00:21.481162 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.481175 | orchestrator |
2025-04-13 01:00:21.481188 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-04-13 01:00:21.481200 | orchestrator | Sunday 13 April 2025 01:00:10 +0000 (0:00:00.218) 0:00:17.094 **********
2025-04-13 01:00:21.481212 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2025-04-13 01:00:21.481225 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2025-04-13 01:00:21.481237 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2025-04-13 01:00:21.481250 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.481263 | orchestrator |
2025-04-13 01:00:21.481275 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-04-13 01:00:21.481288 | orchestrator | Sunday 13 April 2025 01:00:10 +0000 (0:00:00.246) 0:00:17.340 **********
2025-04-13 01:00:21.481300 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-04-13 01:00:21.481312 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 
2025-04-13 01:00:21.481325 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 
2025-04-13 01:00:21.481338 | orchestrator |
2025-04-13 01:00:21.481350 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
2025-04-13 01:00:21.481363 | orchestrator | Sunday 13 April 2025 01:00:10 +0000 (0:00:00.191) 0:00:17.532 **********
2025-04-13 01:00:21.481376 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.481388 | orchestrator |
2025-04-13 01:00:21.481401 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-04-13 01:00:21.481414 | orchestrator | Sunday 13 April 2025 01:00:10 +0000 (0:00:00.128) 0:00:17.660 **********
2025-04-13 01:00:21.481426 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:00:21.481439 | orchestrator |
2025-04-13 01:00:21.481452 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-04-13 01:00:21.481475 | orchestrator | Sunday 13 April 2025 01:00:11 +0000 (0:00:00.320) 0:00:17.980 **********
2025-04-13 01:00:21.481488 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 01:00:21.481507 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-13 01:00:21.481520 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-13 01:00:21.481532 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-04-13 01:00:21.481550 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-04-13 01:00:21.481563 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-04-13 01:00:21.481575 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-04-13 01:00:21.481587 | orchestrator |
2025-04-13 01:00:21.481600 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-04-13 01:00:21.481612 | orchestrator | Sunday 13 April 2025 01:00:11 +0000 (0:00:00.855) 0:00:18.836 **********
2025-04-13 01:00:21.481625 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-13 01:00:21.481638 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-13 01:00:21.481650 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-13 01:00:21.481663 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-04-13 01:00:21.481675 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-04-13 01:00:21.481687 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-04-13 01:00:21.481700 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-04-13 01:00:21.481712 | orchestrator |
2025-04-13 01:00:21.481725 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ******************************
2025-04-13 01:00:21.481737 | orchestrator | Sunday 13 April 2025 01:00:13 +0000 (0:00:01.623) 0:00:20.459 **********
2025-04-13 01:00:21.481749 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:00:21.481762 | orchestrator |
2025-04-13 01:00:21.481775 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] ***
2025-04-13 01:00:21.481787 | orchestrator | Sunday 13 April 2025 01:00:14 +0000 (0:00:00.472) 0:00:20.932 **********
2025-04-13 01:00:21.481799 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-13 01:00:21.481812 | orchestrator |
2025-04-13 01:00:21.481824 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] ***
2025-04-13 01:00:21.481837 | orchestrator | Sunday 13 April 2025 01:00:14 +0000 (0:00:00.674) 0:00:21.606 **********
2025-04-13 01:00:21.481849 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring)
2025-04-13 01:00:21.481862 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring)
2025-04-13 01:00:21.481874 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring)
2025-04-13 01:00:21.481886 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring)
2025-04-13 01:00:21.481899 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring)
2025-04-13 01:00:21.481911 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring)
2025-04-13 01:00:21.481923 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring)
2025-04-13 01:00:21.481935 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring)
2025-04-13 01:00:21.481947 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring)
2025-04-13 01:00:21.481960 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring)
2025-04-13 01:00:21.481979 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring)
2025-04-13 01:00:21.481991 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring)
2025-04-13 01:00:21.482004 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring)
2025-04-13 01:00:21.482045 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring)
2025-04-13 01:00:21.482061 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring)
2025-04-13 01:00:21.482073 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring)
2025-04-13 01:00:21.482090 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring)
2025-04-13 01:00:21.482103 | orchestrator |
2025-04-13 01:00:21.482133 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 01:00:21.482148 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-04-13 01:00:21.482161 | orchestrator |
2025-04-13 01:00:21.482179 | orchestrator |
2025-04-13 01:00:21.482191 | orchestrator |
2025-04-13 01:00:21.482204 | orchestrator | TASKS RECAP ********************************************************************
2025-04-13 01:00:21.482216 | orchestrator | Sunday 13 April 2025 01:00:20 +0000 (0:00:05.867) 0:00:27.474 **********
2025-04-13 01:00:21.482229 | orchestrator | ===============================================================================
2025-04-13 01:00:21.482241 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 5.87s
2025-04-13 01:00:21.482254 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.98s
2025-04-13 01:00:21.482266 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.62s
2025-04-13 01:00:21.482284 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.47s
2025-04-13 01:00:24.521968 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.18s
2025-04-13 01:00:24.522194 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.88s
2025-04-13 01:00:24.522219 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.86s
2025-04-13 01:00:24.522234 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.79s
2025-04-13 01:00:24.522248 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.71s
2025-04-13 01:00:24.522262 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.67s
2025-04-13 01:00:24.522276 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.60s
2025-04-13 01:00:24.522290 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.53s
2025-04-13 01:00:24.522304 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.48s
2025-04-13 01:00:24.522317 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.47s
2025-04-13 01:00:24.522331 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.47s
2025-04-13 01:00:24.522344 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.47s
2025-04-13 01:00:24.522358 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.45s
2025-04-13 01:00:24.522371 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.42s
2025-04-13 01:00:24.522385 | orchestrator | ceph-facts : set osd_pool_default_crush_rule fact ----------------------- 0.36s
2025-04-13 01:00:24.522399 | orchestrator | ceph-facts : set_fact build dedicated_devices from resolved symlinks ---- 0.33s
2025-04-13 01:00:24.522413 | orchestrator | 2025-04-13 01:00:21 | INFO  | Task 50108906-0c81-4902-ab85-a58befe74758 is in state SUCCESS
2025-04-13 01:00:24.522428 | orchestrator | 2025-04-13 01:00:21 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED
2025-04-13 01:00:24.522442 | orchestrator | 2025-04-13 01:00:21 | 
INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:00:24.522484 | orchestrator | 2025-04-13 01:00:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:00:24.522521 | orchestrator | 2025-04-13 01:00:24 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:00:24.523469 | orchestrator | 2025-04-13 01:00:24 | INFO  | Task 85038687-faf6-4abd-98b8-def7ef964de6 is in state STARTED 2025-04-13 01:00:24.525756 | orchestrator | 2025-04-13 01:00:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:00:24.527007 | orchestrator | 2025-04-13 01:00:24 | INFO  | Task 67612c76-5b47-414d-8287-c9df69c3dc10 is in state SUCCESS 2025-04-13 01:00:24.528354 | orchestrator | 2025-04-13 01:00:24 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:00:24.529611 | orchestrator | 2025-04-13 01:00:24 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:00:27.580917 | orchestrator | 2025-04-13 01:00:24 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:01:19.511987 | orchestrator | 2025-04-13 01:01:19 | INFO  | Task b48e6a9a-8c07-4fd3-9781-e7f429851087 is in state STARTED 2025-04-13 01:01:19.513104 | orchestrator | 2025-04-13 01:01:19 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:01:19.519061 | orchestrator | 2025-04-13 01:01:19 | INFO  | Task 85038687-faf6-4abd-98b8-def7ef964de6 is in state STARTED 2025-04-13 01:01:19.519743 | orchestrator | 2025-04-13 01:01:19 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:01:19.521044 | orchestrator | 2025-04-13 01:01:19 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:01:19.521515 | orchestrator | 2025-04-13 01:01:19 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:01:22.566650 | orchestrator | 2025-04-13 01:01:19 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:01:22.566785 | orchestrator | 2025-04-13 01:01:22.566804 | orchestrator | 2025-04-13
01:01:22.566818 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-04-13 01:01:22.566831 | orchestrator | 2025-04-13 01:01:22.566844 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-04-13 01:01:22.566856 | orchestrator | Sunday 13 April 2025 00:59:44 +0000 (0:00:00.144) 0:00:00.144 ********** 2025-04-13 01:01:22.566869 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-04-13 01:01:22.566882 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-13 01:01:22.566894 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-13 01:01:22.566906 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-04-13 01:01:22.567040 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-13 01:01:22.567073 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-04-13 01:01:22.567086 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-04-13 01:01:22.567098 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-04-13 01:01:22.567146 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-04-13 01:01:22.567160 | orchestrator | 2025-04-13 01:01:22.567172 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-04-13 01:01:22.567184 | orchestrator | Sunday 13 April 2025 00:59:47 +0000 (0:00:03.006) 0:00:03.150 ********** 2025-04-13 01:01:22.567197 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-04-13 01:01:22.567209 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-13 01:01:22.567224 | 
orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-13 01:01:22.567236 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-04-13 01:01:22.567249 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-13 01:01:22.567262 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-04-13 01:01:22.567275 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-04-13 01:01:22.567287 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-04-13 01:01:22.567299 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-04-13 01:01:22.567311 | orchestrator | 2025-04-13 01:01:22.567324 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-04-13 01:01:22.567336 | orchestrator | Sunday 13 April 2025 00:59:47 +0000 (0:00:00.235) 0:00:03.386 ********** 2025-04-13 01:01:22.567348 | orchestrator | ok: [testbed-manager] => { 2025-04-13 01:01:22.567478 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 
2025-04-13 01:01:22.567503 | orchestrator | } 2025-04-13 01:01:22.567518 | orchestrator | 2025-04-13 01:01:22.567532 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-04-13 01:01:22.567546 | orchestrator | Sunday 13 April 2025 00:59:48 +0000 (0:00:00.160) 0:00:03.547 ********** 2025-04-13 01:01:22.567560 | orchestrator | changed: [testbed-manager] 2025-04-13 01:01:22.567581 | orchestrator | 2025-04-13 01:01:22.567595 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-04-13 01:01:22.567609 | orchestrator | Sunday 13 April 2025 01:00:21 +0000 (0:00:32.931) 0:00:36.478 ********** 2025-04-13 01:01:22.567623 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-04-13 01:01:22.567637 | orchestrator | 2025-04-13 01:01:22.567651 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-04-13 01:01:22.567665 | orchestrator | Sunday 13 April 2025 01:00:21 +0000 (0:00:00.356) 0:00:36.835 ********** 2025-04-13 01:01:22.567680 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-04-13 01:01:22.567695 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-04-13 01:01:22.567709 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-04-13 01:01:22.567735 | orchestrator | changed: [testbed-manager] => (item={'src': 
'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-04-13 01:01:22.567750 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-04-13 01:01:22.567778 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-04-13 01:01:22.568465 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-04-13 01:01:22.568491 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-04-13 01:01:22.568504 | orchestrator | 2025-04-13 01:01:22.568517 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-04-13 01:01:22.568529 | orchestrator | Sunday 13 April 2025 01:00:24 +0000 (0:00:02.723) 0:00:39.558 ********** 2025-04-13 01:01:22.568542 | orchestrator | skipping: [testbed-manager] 2025-04-13 01:01:22.568554 | orchestrator | 2025-04-13 01:01:22.568567 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:01:22.568581 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-13 01:01:22.568593 | orchestrator | 2025-04-13 01:01:22.568605 | orchestrator | Sunday 13 April 2025 01:00:24 +0000 (0:00:00.037) 0:00:39.596 ********** 2025-04-13 01:01:22.568618 | orchestrator | =============================================================================== 2025-04-13 01:01:22.568630 | orchestrator | Fetch ceph keys 
from the first monitor node ---------------------------- 32.93s 2025-04-13 01:01:22.568642 | orchestrator | Check ceph keys --------------------------------------------------------- 3.01s 2025-04-13 01:01:22.568654 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.72s 2025-04-13 01:01:22.568667 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.36s 2025-04-13 01:01:22.568686 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.24s 2025-04-13 01:01:22.568698 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.16s 2025-04-13 01:01:22.568711 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.04s 2025-04-13 01:01:22.568723 | orchestrator | 2025-04-13 01:01:22.568736 | orchestrator | 2025-04-13 01:01:22 | INFO  | Task b48e6a9a-8c07-4fd3-9781-e7f429851087 is in state SUCCESS 2025-04-13 01:01:22.568749 | orchestrator | 2025-04-13 01:01:22 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:01:22.568762 | orchestrator | 2025-04-13 01:01:22 | INFO  | Task 85038687-faf6-4abd-98b8-def7ef964de6 is in state STARTED 2025-04-13 01:01:22.568774 | orchestrator | 2025-04-13 01:01:22 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:01:22.568792 | orchestrator | 2025-04-13 01:01:22 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:01:22.570480 | orchestrator | 2025-04-13 01:01:22 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:01:25.609002 | orchestrator | 2025-04-13 01:01:22 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:01:25.609202 | orchestrator | 2025-04-13 01:01:25 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:01:28.648805 | orchestrator | 2025-04-13 01:01:25 | INFO  | 
Task 8d1fa9bd-3871-4b79-b524-cee4a18fb9be is in state STARTED [... polling of tasks a9ad4dc4, 8d1fa9bd, 85038687, 79d052e9, 3bc5dd78 and 38cd2567 (all STARTED) repeated every ~3 s from 01:01:25 to 01:01:49; identical lines elided ...] 2025-04-13 01:01:49.923900 | orchestrator | 2025-04-13 01:01:49 | INFO  | Task 
38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:01:52.951590 | orchestrator | 2025-04-13 01:01:49 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:01:52.951726 | orchestrator | 2025-04-13 01:01:52 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:01:52.952071 | orchestrator | 2025-04-13 01:01:52 | INFO  | Task 8d1fa9bd-3871-4b79-b524-cee4a18fb9be is in state STARTED 2025-04-13 01:01:52.952137 | orchestrator | 2025-04-13 01:01:52 | INFO  | Task 85038687-faf6-4abd-98b8-def7ef964de6 is in state STARTED 2025-04-13 01:01:52.952543 | orchestrator | 2025-04-13 01:01:52 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:01:52.953217 | orchestrator | 2025-04-13 01:01:52 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:01:52.953701 | orchestrator | 2025-04-13 01:01:52 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:01:52.953802 | orchestrator | 2025-04-13 01:01:52 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:01:55.985380 | orchestrator | 2025-04-13 01:01:55 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:01:55.985665 | orchestrator | 2025-04-13 01:01:55 | INFO  | Task 8d1fa9bd-3871-4b79-b524-cee4a18fb9be is in state SUCCESS 2025-04-13 01:01:55.985702 | orchestrator | 2025-04-13 01:01:55.985718 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-04-13 01:01:55.985734 | orchestrator | 2025-04-13 01:01:55.985749 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-04-13 01:01:55.985763 | orchestrator | Sunday 13 April 2025 01:00:27 +0000 (0:00:00.165) 0:00:00.165 ********** 2025-04-13 01:01:55.985778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for 
testbed-manager 2025-04-13 01:01:55.985811 | orchestrator | 2025-04-13 01:01:55.985826 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-04-13 01:01:55.985859 | orchestrator | Sunday 13 April 2025 01:00:27 +0000 (0:00:00.213) 0:00:00.378 ********** 2025-04-13 01:01:55.985874 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-04-13 01:01:55.985889 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-04-13 01:01:55.985903 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-04-13 01:01:55.985918 | orchestrator | 2025-04-13 01:01:55.985932 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-04-13 01:01:55.985946 | orchestrator | Sunday 13 April 2025 01:00:28 +0000 (0:00:01.211) 0:00:01.590 ********** 2025-04-13 01:01:55.985960 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-04-13 01:01:55.985974 | orchestrator | 2025-04-13 01:01:55.985989 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-04-13 01:01:55.986002 | orchestrator | Sunday 13 April 2025 01:00:30 +0000 (0:00:01.133) 0:00:02.723 ********** 2025-04-13 01:01:55.986075 | orchestrator | changed: [testbed-manager] 2025-04-13 01:01:55.986100 | orchestrator | 2025-04-13 01:01:55.986157 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-04-13 01:01:55.986183 | orchestrator | Sunday 13 April 2025 01:00:30 +0000 (0:00:00.853) 0:00:03.577 ********** 2025-04-13 01:01:55.986360 | orchestrator | changed: [testbed-manager] 2025-04-13 01:01:55.986389 | orchestrator | 2025-04-13 01:01:55.986413 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-04-13 01:01:55.986435 | orchestrator | Sunday 13 April 2025 
01:00:31 +0000 (0:00:01.022) 0:00:04.599 ********** 2025-04-13 01:01:55.986453 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-04-13 01:01:55.986470 | orchestrator | ok: [testbed-manager] 2025-04-13 01:01:55.986486 | orchestrator | 2025-04-13 01:01:55.986502 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-04-13 01:01:55.986518 | orchestrator | Sunday 13 April 2025 01:01:12 +0000 (0:00:40.218) 0:00:44.817 ********** 2025-04-13 01:01:55.986534 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-04-13 01:01:55.986550 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-04-13 01:01:55.986618 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-04-13 01:01:55.986635 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-04-13 01:01:55.986649 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-04-13 01:01:55.986663 | orchestrator | 2025-04-13 01:01:55.986677 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-04-13 01:01:55.986691 | orchestrator | Sunday 13 April 2025 01:01:16 +0000 (0:00:04.290) 0:00:49.107 ********** 2025-04-13 01:01:55.986705 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-04-13 01:01:55.986718 | orchestrator | 2025-04-13 01:01:55.986732 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-04-13 01:01:55.986746 | orchestrator | Sunday 13 April 2025 01:01:16 +0000 (0:00:00.465) 0:00:49.573 ********** 2025-04-13 01:01:55.986759 | orchestrator | skipping: [testbed-manager] 2025-04-13 01:01:55.986781 | orchestrator | 2025-04-13 01:01:55.986795 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-04-13 01:01:55.986809 | orchestrator | Sunday 13 April 2025 01:01:17 +0000 (0:00:00.122) 0:00:49.695 ********** 
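The "Manage cephclient service" task above only succeeds after a FAILED - RETRYING round (it reported "10 retries left" before turning ok, accounting for the 40 s runtime). A minimal Python sketch of that retries/delay pattern, assuming an illustrative `check()` callable rather than the role's actual health probe:

```python
import time

def wait_until_ready(check, retries=10, delay=4.0):
    """Poll check() until it returns True, like Ansible's retries/delay.

    Each unsuccessful attempt corresponds to a FAILED - RETRYING line in
    the log; the loop only gives up once all retries are exhausted.
    """
    for attempt in range(retries):
        if check():
            return attempt  # number of failed attempts before success
        time.sleep(delay)
    raise TimeoutError(f"service not ready after {retries} attempts")

# Hypothetical check that fails once, then succeeds (one retry, as in the log).
state = {"calls": 0}

def fake_check():
    state["calls"] += 1
    return state["calls"] > 1

attempts = wait_until_ready(fake_check, retries=10, delay=0.0)
```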
2025-04-13 01:01:55.986823 | orchestrator | skipping: [testbed-manager] 2025-04-13 01:01:55.986836 | orchestrator | 2025-04-13 01:01:55.986850 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-04-13 01:01:55.986864 | orchestrator | Sunday 13 April 2025 01:01:17 +0000 (0:00:00.305) 0:00:50.001 ********** 2025-04-13 01:01:55.986878 | orchestrator | changed: [testbed-manager] 2025-04-13 01:01:55.986892 | orchestrator | 2025-04-13 01:01:55.986906 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-04-13 01:01:55.986920 | orchestrator | Sunday 13 April 2025 01:01:18 +0000 (0:00:01.498) 0:00:51.499 ********** 2025-04-13 01:01:55.986933 | orchestrator | changed: [testbed-manager] 2025-04-13 01:01:55.986948 | orchestrator | 2025-04-13 01:01:55.986961 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-04-13 01:01:55.986975 | orchestrator | Sunday 13 April 2025 01:01:20 +0000 (0:00:01.156) 0:00:52.656 ********** 2025-04-13 01:01:55.986989 | orchestrator | changed: [testbed-manager] 2025-04-13 01:01:55.987003 | orchestrator | 2025-04-13 01:01:55.987016 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-04-13 01:01:55.987030 | orchestrator | Sunday 13 April 2025 01:01:20 +0000 (0:00:00.452) 0:00:53.109 ********** 2025-04-13 01:01:55.987044 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-04-13 01:01:55.987138 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-04-13 01:01:55.987309 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-04-13 01:01:55.987326 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-04-13 01:01:55.987340 | orchestrator | 2025-04-13 01:01:55.987354 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:01:55.987369 | orchestrator | testbed-manager : 
ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-13 01:01:55.987385 | orchestrator | 2025-04-13 01:01:55.987415 | orchestrator | Sunday 13 April 2025 01:01:21 +0000 (0:00:01.183) 0:00:54.293 ********** 2025-04-13 01:01:55.988064 | orchestrator | =============================================================================== 2025-04-13 01:01:55.988094 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.22s 2025-04-13 01:01:55.988134 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.29s 2025-04-13 01:01:55.988149 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.50s 2025-04-13 01:01:55.988164 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.21s 2025-04-13 01:01:55.988178 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.18s 2025-04-13 01:01:55.988192 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 1.16s 2025-04-13 01:01:55.988206 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.13s 2025-04-13 01:01:55.988221 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.02s 2025-04-13 01:01:55.988235 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.85s 2025-04-13 01:01:55.988249 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2025-04-13 01:01:55.988262 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.45s 2025-04-13 01:01:55.988276 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2025-04-13 01:01:55.988288 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2025-04-13 01:01:55.988300 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-04-13 01:01:55.988322 | orchestrator | 2025-04-13 01:01:55.988344 | orchestrator | 2025-04-13 01:01:55 | INFO  | Task 85038687-faf6-4abd-98b8-def7ef964de6 is in state STARTED 2025-04-13 01:01:55.988364 | orchestrator | 2025-04-13 01:01:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:01:55.988392 | orchestrator | 2025-04-13 01:01:55 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:01:55.989594 | orchestrator | 2025-04-13 01:01:55 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:01:59.014759 | orchestrator | 2025-04-13 01:01:55 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:01:59.014909 | orchestrator | 2025-04-13 01:01:59 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:01:59.015243 | orchestrator | 2025-04-13 01:01:59 | INFO  | Task 85038687-faf6-4abd-98b8-def7ef964de6 is in state STARTED 2025-04-13 01:01:59.015992 | orchestrator | 2025-04-13 01:01:59 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:01:59.016766 | orchestrator | 2025-04-13 01:01:59 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:01:59.018407 | orchestrator | 2025-04-13 01:01:59 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:02.065691 | orchestrator | 2025-04-13 01:01:59 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:02.065961 | orchestrator | 2025-04-13 01:02:02 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:02.067031 | orchestrator | 2025-04-13 01:02:02 | INFO  | Task 85038687-faf6-4abd-98b8-def7ef964de6 is in state STARTED 2025-04-13 01:02:02.067074 | orchestrator | 2025-04-13 01:02:02 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 
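The interleaved "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines come from a polling loop over the queued deployment tasks. A sketch of that loop, assuming a hypothetical `get_state(task_id)` in place of the real OSISM task API:

```python
import time

def wait_for_tasks(get_state, task_ids, poll_interval=1.0):
    """Poll task IDs until none remains in state STARTED.

    Each cycle prints one state line per pending task, then a wait notice,
    matching the log; finished tasks drop out of the pending set once they
    reach a terminal state such as SUCCESS.
    """
    pending = set(task_ids)
    while pending:
        states = {t: get_state(t) for t in sorted(pending)}
        for task_id, task_state in states.items():
            print(f"Task {task_id} is in state {task_state}")
        pending = {t for t, s in states.items() if s == "STARTED"}
        if pending:
            print(f"Wait {int(poll_interval)} second(s) until the next check")
            time.sleep(poll_interval)
    return True

# Hypothetical backend: t2 stays STARTED for two polls, then succeeds.
rounds = {"t1": 0, "t2": 2}

def fake_state(task_id):
    if rounds[task_id] > 0:
        rounds[task_id] -= 1
        return "STARTED"
    return "SUCCESS"

done = wait_for_tasks(fake_state, ["t1", "t2"], poll_interval=0.0)
```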
2025-04-13 01:02:02.067098 | orchestrator | 2025-04-13 01:02:02 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED [... polling of tasks a9ad4dc4, 85038687, 79d052e9, 3bc5dd78 and 38cd2567 (all STARTED) repeated every ~3 s from 01:02:02 to 01:02:17; identical lines elided ...] 2025-04-13 01:02:17.262327 | 
orchestrator | 2025-04-13 01:02:17 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:17.263072 | orchestrator | 2025-04-13 01:02:17 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:17.263993 | orchestrator | 2025-04-13 01:02:17 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:20.305588 | orchestrator | 2025-04-13 01:02:20 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:20.306729 | orchestrator | 2025-04-13 01:02:20 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:20.307934 | orchestrator | 2025-04-13 01:02:20 | INFO  | Task 85038687-faf6-4abd-98b8-def7ef964de6 is in state SUCCESS 2025-04-13 01:02:20.308684 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-13 01:02:20.308792 | orchestrator | 2025-04-13 01:02:20.308813 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-04-13 01:02:20.308827 | orchestrator | 2025-04-13 01:02:20.308840 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-04-13 01:02:20.308853 | orchestrator | Sunday 13 April 2025 01:01:25 +0000 (0:00:00.410) 0:00:00.410 ********** 2025-04-13 01:02:20.308865 | orchestrator | changed: [testbed-manager] 2025-04-13 01:02:20.308879 | orchestrator | 2025-04-13 01:02:20.308891 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-04-13 01:02:20.308904 | orchestrator | Sunday 13 April 2025 01:01:26 +0000 (0:00:01.330) 0:00:01.740 ********** 2025-04-13 01:02:20.308916 | orchestrator | changed: [testbed-manager] 2025-04-13 01:02:20.308928 | orchestrator | 2025-04-13 01:02:20.308940 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-04-13 01:02:20.308953 | orchestrator | Sunday 13 April 
2025 01:01:27 +0000 (0:00:00.842) 0:00:02.583 ********** 2025-04-13 01:02:20.308965 | orchestrator | changed: [testbed-manager] 2025-04-13 01:02:20.308977 | orchestrator | 2025-04-13 01:02:20.308990 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-04-13 01:02:20.309002 | orchestrator | Sunday 13 April 2025 01:01:28 +0000 (0:00:00.780) 0:00:03.364 ********** 2025-04-13 01:02:20.309014 | orchestrator | changed: [testbed-manager] 2025-04-13 01:02:20.309027 | orchestrator | 2025-04-13 01:02:20.309039 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-04-13 01:02:20.309051 | orchestrator | Sunday 13 April 2025 01:01:29 +0000 (0:00:00.910) 0:00:04.274 ********** 2025-04-13 01:02:20.309063 | orchestrator | changed: [testbed-manager] 2025-04-13 01:02:20.309076 | orchestrator | 2025-04-13 01:02:20.309088 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-04-13 01:02:20.309125 | orchestrator | Sunday 13 April 2025 01:01:29 +0000 (0:00:00.919) 0:00:05.193 ********** 2025-04-13 01:02:20.309140 | orchestrator | changed: [testbed-manager] 2025-04-13 01:02:20.309153 | orchestrator | 2025-04-13 01:02:20.309165 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-04-13 01:02:20.309178 | orchestrator | Sunday 13 April 2025 01:01:30 +0000 (0:00:00.897) 0:00:06.091 ********** 2025-04-13 01:02:20.309190 | orchestrator | changed: [testbed-manager] 2025-04-13 01:02:20.309203 | orchestrator | 2025-04-13 01:02:20.309215 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-04-13 01:02:20.309228 | orchestrator | Sunday 13 April 2025 01:01:32 +0000 (0:00:01.179) 0:00:07.270 ********** 2025-04-13 01:02:20.309243 | orchestrator | changed: [testbed-manager] 2025-04-13 01:02:20.309387 | orchestrator | 2025-04-13 01:02:20.309401 | orchestrator 
| TASK [Create admin user] ******************************************************* 2025-04-13 01:02:20.309413 | orchestrator | Sunday 13 April 2025 01:01:33 +0000 (0:00:01.111) 0:00:08.382 ********** 2025-04-13 01:02:20.309426 | orchestrator | changed: [testbed-manager] 2025-04-13 01:02:20.309438 | orchestrator | 2025-04-13 01:02:20.309451 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-04-13 01:02:20.309463 | orchestrator | Sunday 13 April 2025 01:01:49 +0000 (0:00:16.821) 0:00:25.203 ********** 2025-04-13 01:02:20.309497 | orchestrator | skipping: [testbed-manager] 2025-04-13 01:02:20.309510 | orchestrator | 2025-04-13 01:02:20.309523 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-13 01:02:20.309535 | orchestrator | 2025-04-13 01:02:20.309547 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-13 01:02:20.309560 | orchestrator | Sunday 13 April 2025 01:01:50 +0000 (0:00:00.676) 0:00:25.880 ********** 2025-04-13 01:02:20.309572 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:02:20.309584 | orchestrator | 2025-04-13 01:02:20.309597 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-13 01:02:20.309609 | orchestrator | 2025-04-13 01:02:20.309621 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-13 01:02:20.309633 | orchestrator | Sunday 13 April 2025 01:01:52 +0000 (0:00:01.965) 0:00:27.846 ********** 2025-04-13 01:02:20.309646 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:02:20.309658 | orchestrator | 2025-04-13 01:02:20.309670 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-13 01:02:20.309682 | orchestrator | 2025-04-13 01:02:20.309695 | orchestrator | TASK [Restart ceph manager service] 
******************************************** 2025-04-13 01:02:20.309707 | orchestrator | Sunday 13 April 2025 01:01:54 +0000 (0:00:01.603) 0:00:29.449 ********** 2025-04-13 01:02:20.309719 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:02:20.309732 | orchestrator | 2025-04-13 01:02:20.309744 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:02:20.309757 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-13 01:02:20.309771 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:02:20.309784 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:02:20.309796 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:02:20.309808 | orchestrator | 2025-04-13 01:02:20.309821 | orchestrator | 2025-04-13 01:02:20.309833 | orchestrator | 2025-04-13 01:02:20.309845 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:02:20.309857 | orchestrator | Sunday 13 April 2025 01:01:55 +0000 (0:00:01.347) 0:00:30.797 ********** 2025-04-13 01:02:20.309869 | orchestrator | =============================================================================== 2025-04-13 01:02:20.309882 | orchestrator | Create admin user ------------------------------------------------------ 16.82s 2025-04-13 01:02:20.309906 | orchestrator | Restart ceph manager service -------------------------------------------- 4.92s 2025-04-13 01:02:20.310683 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.33s 2025-04-13 01:02:20.310721 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.18s 2025-04-13 01:02:20.310734 | orchestrator | Write ceph_dashboard_password to 
temporary file ------------------------- 1.11s
2025-04-13 01:02:20.310746 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.92s
2025-04-13 01:02:20.310759 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.91s
2025-04-13 01:02:20.310771 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.90s
2025-04-13 01:02:20.310783 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.84s
2025-04-13 01:02:20.310795 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.78s
2025-04-13 01:02:20.310816 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.68s
2025-04-13 01:02:20.310829 | orchestrator |
2025-04-13 01:02:20.310871 | orchestrator |
2025-04-13 01:02:20.310886 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-13 01:02:20.310911 | orchestrator |
2025-04-13 01:02:20.310924 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-13 01:02:20.310936 | orchestrator | Sunday 13 April 2025 01:00:16 +0000 (0:00:00.361) 0:00:00.361 **********
2025-04-13 01:02:20.310948 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:02:20.310961 | orchestrator | ok: [testbed-node-1]
2025-04-13 01:02:20.310974 | orchestrator | ok: [testbed-node-2]
2025-04-13 01:02:20.310986 | orchestrator |
2025-04-13 01:02:20.310998 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-13 01:02:20.311010 | orchestrator | Sunday 13 April 2025 01:00:17 +0000 (0:00:00.502) 0:00:00.864 **********
2025-04-13 01:02:20.311023 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-04-13 01:02:20.311035 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-04-13 01:02:20.311048 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-04-13 01:02:20.311060 | orchestrator |
2025-04-13 01:02:20.311072 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-04-13 01:02:20.311084 | orchestrator |
2025-04-13 01:02:20.311097 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-04-13 01:02:20.311129 | orchestrator | Sunday 13 April 2025 01:00:17 +0000 (0:00:00.346) 0:00:01.211 **********
2025-04-13 01:02:20.311142 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 01:02:20.311156 | orchestrator |
2025-04-13 01:02:20.311168 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-04-13 01:02:20.311180 | orchestrator | Sunday 13 April 2025 01:00:18 +0000 (0:00:00.829) 0:00:02.040 **********
2025-04-13 01:02:20.311193 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-04-13 01:02:20.311205 | orchestrator |
2025-04-13 01:02:20.311217 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-04-13 01:02:20.311230 | orchestrator | Sunday 13 April 2025 01:00:21 +0000 (0:00:03.496) 0:00:05.537 **********
2025-04-13 01:02:20.311242 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-04-13 01:02:20.311255 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-04-13 01:02:20.311267 | orchestrator |
2025-04-13 01:02:20.311282 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-04-13 01:02:20.311296 | orchestrator | Sunday 13 April 2025 01:00:28 +0000 (0:00:06.806) 0:00:12.343 **********
2025-04-13 01:02:20.311310 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-04-13 01:02:20.311324 | orchestrator |
2025-04-13 01:02:20.311338 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-04-13 01:02:20.311352 | orchestrator | Sunday 13 April 2025 01:00:32 +0000 (0:00:03.433) 0:00:15.776 **********
2025-04-13 01:02:20.311366 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-04-13 01:02:20.311380 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-04-13 01:02:20.311393 | orchestrator |
2025-04-13 01:02:20.311407 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-04-13 01:02:20.311421 | orchestrator | Sunday 13 April 2025 01:00:36 +0000 (0:00:04.031) 0:00:19.808 **********
2025-04-13 01:02:20.311436 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-04-13 01:02:20.311450 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-04-13 01:02:20.311462 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-04-13 01:02:20.311475 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-04-13 01:02:20.311487 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-04-13 01:02:20.311499 | orchestrator |
2025-04-13 01:02:20.311511 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-04-13 01:02:20.311524 | orchestrator | Sunday 13 April 2025 01:00:51 +0000 (0:00:15.396) 0:00:35.204 **********
2025-04-13 01:02:20.311545 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-04-13 01:02:20.311557 | orchestrator |
2025-04-13 01:02:20.311569 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-04-13 01:02:20.311582 | orchestrator | Sunday 13 April 2025 01:00:55 +0000 (0:00:04.151) 0:00:39.356 **********
2025-04-13 01:02:20.311596 | orchestrator | changed: [testbed-node-1] => (item={'key':
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.311737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.311760 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.311775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.311798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.311812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.311834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.311861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:02:20.311875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:02:20.311887 | orchestrator |
2025-04-13 01:02:20.311900 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-04-13 01:02:20.311913 | orchestrator | Sunday 13 April 2025 01:00:57 +0000 (0:00:02.090) 0:00:41.446 **********
2025-04-13 01:02:20.311925 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-04-13 01:02:20.311938 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-04-13 01:02:20.311950 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-04-13 01:02:20.311963 | orchestrator |
2025-04-13 01:02:20.311975 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-04-13 01:02:20.311994 | orchestrator | Sunday 13 April 2025 01:01:00 +0000 (0:00:00.301) 0:00:44.165 **********
2025-04-13 01:02:20.312006 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:02:20.312019 | orchestrator |
2025-04-13 01:02:20.312031 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-04-13 01:02:20.312043 | orchestrator | Sunday 13 April 2025 01:01:00 +0000 (0:00:00.301) 0:00:44.466 **********
2025-04-13 01:02:20.312056 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:02:20.312068 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:02:20.312080 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:02:20.312092 | orchestrator |
2025-04-13 01:02:20.312121 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-04-13 01:02:20.312134 | orchestrator | Sunday 13 April 2025 01:01:01 +0000 (0:00:00.978) 0:00:45.445 **********
2025-04-13 01:02:20.312147 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 01:02:20.312159 | orchestrator |
2025-04-13 01:02:20.312172 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-04-13 01:02:20.312191 | orchestrator | Sunday 13 April 2025 01:01:03 +0000 (0:00:01.256) 0:00:46.702 **********
2025-04-13 01:02:20.312205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.312226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.312241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.312261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.312275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.312288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.312307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.312321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.312334 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:02:20.312352 | orchestrator |
2025-04-13 01:02:20.312365 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-04-13 01:02:20.312377 | orchestrator | Sunday 13 April 2025 01:01:06 +0000 (0:00:03.644) 0:00:50.347 **********
2025-04-13 01:02:20.312390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-13 01:02:20.312405 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.312425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.312439 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:02:20.312452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-13 01:02:20.312471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.312484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.312497 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:02:20.312510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-13 01:02:20.312531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.312544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:02:20.312557 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:02:20.312570 | orchestrator |
2025-04-13 01:02:20.312582 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-04-13 01:02:20.312595 | orchestrator | Sunday 13 April 2025 01:01:08 +0000 (0:00:01.842) 0:00:52.189 **********
2025-04-13 01:02:20.312620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-13 01:02:20.312634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image':
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.312647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.312660 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:02:20.312678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-13 01:02:20.312692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.312711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.312723 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:02:20.312737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-13 01:02:20.312750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.312764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.312776 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:02:20.312789 | orchestrator | 2025-04-13 01:02:20.312801 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-04-13 01:02:20.312818 | orchestrator | Sunday 13 April 2025 01:01:09 +0000 (0:00:01.414) 0:00:53.604 ********** 2025-04-13 01:02:20.312832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.312857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.312871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.312885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.312904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.312918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.312937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.312950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.312963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.312976 | orchestrator | 2025-04-13 01:02:20.312988 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-04-13 01:02:20.313001 | orchestrator | 
Sunday 13 April 2025 01:01:15 +0000 (0:00:05.126) 0:00:58.731 **********
2025-04-13 01:02:20.313013 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:02:20.313026 | orchestrator | changed: [testbed-node-2]
2025-04-13 01:02:20.313038 | orchestrator | changed: [testbed-node-1]
2025-04-13 01:02:20.313050 | orchestrator |
2025-04-13 01:02:20.313063 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-04-13 01:02:20.313082 | orchestrator | Sunday 13 April 2025 01:01:18 +0000 (0:00:03.779) 0:01:02.510 **********
2025-04-13 01:02:20.313102 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-13 01:02:20.313156 | orchestrator |
2025-04-13 01:02:20.313177 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-04-13 01:02:20.313196 | orchestrator | Sunday 13 April 2025 01:01:20 +0000 (0:00:01.580) 0:01:04.091 **********
2025-04-13 01:02:20.313209 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:02:20.313221 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:02:20.313234 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:02:20.313246 | orchestrator |
2025-04-13 01:02:20.313258 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-04-13 01:02:20.313270 | orchestrator | Sunday 13 April 2025 01:01:21 +0000 (0:00:01.318) 0:01:05.409 **********
2025-04-13 01:02:20.313299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.313314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.313328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.313341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.313355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.313380 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.313393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.313406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2025-04-13 01:02:20.313419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.313432 | orchestrator | 2025-04-13 01:02:20.313445 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-04-13 01:02:20.313457 | orchestrator | Sunday 13 April 2025 01:01:32 +0000 (0:00:10.651) 0:01:16.061 ********** 2025-04-13 01:02:20.313471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-13 
01:02:20.313497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.313511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.313523 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:02:20.313536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-13 01:02:20.313550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.313563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-13 01:02:20.313589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.313603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.313615 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:02:20.313628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:02:20.313641 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:02:20.313653 | orchestrator | 2025-04-13 01:02:20.313666 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-04-13 01:02:20.313679 | orchestrator | Sunday 13 April 2025 01:01:34 +0000 (0:00:01.637) 0:01:17.698 ********** 2025-04-13 01:02:20.313692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.313706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.313733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-13 01:02:20.313749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.313793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.313808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.313821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.313840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:02:20.313860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 
5672'], 'timeout': '30'}}})
2025-04-13 01:02:20.313873 | orchestrator |
2025-04-13 01:02:20.313886 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-04-13 01:02:20.313898 | orchestrator | Sunday 13 April 2025 01:01:37 +0000 (0:00:03.095) 0:01:20.793 **********
2025-04-13 01:02:20.313911 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:02:20.313924 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:02:20.313936 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:02:20.313948 | orchestrator |
2025-04-13 01:02:20.313961 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-04-13 01:02:20.313973 | orchestrator | Sunday 13 April 2025 01:01:37 +0000 (0:00:00.583) 0:01:21.377 **********
2025-04-13 01:02:20.313986 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:02:20.313998 | orchestrator |
2025-04-13 01:02:20.314010 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-04-13 01:02:20.314185 | orchestrator | Sunday 13 April 2025 01:01:40 +0000 (0:00:03.156) 0:01:24.534 **********
2025-04-13 01:02:20.314199 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:02:20.314218 | orchestrator |
2025-04-13 01:02:20.314230 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-04-13 01:02:20.314243 | orchestrator | Sunday 13 April 2025 01:01:43 +0000 (0:00:02.283) 0:01:26.817 **********
2025-04-13 01:02:20.314255 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:02:20.314267 | orchestrator |
2025-04-13 01:02:20.314280 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-04-13 01:02:20.314292 | orchestrator | Sunday 13 April 2025 01:01:54 +0000 (0:00:10.892) 0:01:37.710 **********
2025-04-13 01:02:20.314304 | orchestrator |
2025-04-13 01:02:20.314317 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-04-13 01:02:20.314329 | orchestrator | Sunday 13 April 2025 01:01:54 +0000 (0:00:00.169) 0:01:37.879 **********
2025-04-13 01:02:20.314341 | orchestrator |
2025-04-13 01:02:20.314354 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-04-13 01:02:20.314371 | orchestrator | Sunday 13 April 2025 01:01:54 +0000 (0:00:00.164) 0:01:38.044 **********
2025-04-13 01:02:20.314384 | orchestrator |
2025-04-13 01:02:20.314397 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-04-13 01:02:20.314417 | orchestrator | Sunday 13 April 2025 01:01:54 +0000 (0:00:00.050) 0:01:38.094 **********
2025-04-13 01:02:20.314430 | orchestrator | changed: [testbed-node-2]
2025-04-13 01:02:20.314442 | orchestrator | changed: [testbed-node-1]
2025-04-13 01:02:20.314455 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:02:20.314467 | orchestrator |
2025-04-13 01:02:20.314480 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-04-13 01:02:20.314492 | orchestrator | Sunday 13 April 2025 01:02:03 +0000 (0:00:09.342) 0:01:47.436 **********
2025-04-13 01:02:20.314504 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:02:20.314516 | orchestrator | changed: [testbed-node-1]
2025-04-13 01:02:20.314528 | orchestrator | changed: [testbed-node-2]
2025-04-13 01:02:20.314541 | orchestrator |
2025-04-13 01:02:20.314553 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-04-13 01:02:20.314566 | orchestrator | Sunday 13 April 2025 01:02:10 +0000 (0:00:06.272) 0:01:53.709 **********
2025-04-13 01:02:20.314578 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:02:20.314590 | orchestrator | changed: [testbed-node-1]
2025-04-13 01:02:20.314602 | orchestrator | changed: [testbed-node-2]
2025-04-13 01:02:20.314614 | orchestrator |
2025-04-13 01:02:20.314627 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 01:02:20.314639 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-04-13 01:02:20.314652 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-13 01:02:20.314665 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-13 01:02:20.314678 | orchestrator |
2025-04-13 01:02:20.314690 | orchestrator |
2025-04-13 01:02:20.314702 | orchestrator | TASKS RECAP ********************************************************************
2025-04-13 01:02:20.314715 | orchestrator | Sunday 13 April 2025 01:02:17 +0000 (0:00:07.158) 0:02:00.867 **********
2025-04-13 01:02:20.314727 | orchestrator | ===============================================================================
2025-04-13 01:02:20.314739 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.40s
2025-04-13 01:02:20.314752 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.89s
2025-04-13 01:02:20.314764 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.65s
2025-04-13 01:02:20.314776 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.34s
2025-04-13 01:02:20.314788 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.16s
2025-04-13 01:02:20.314801 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.81s
2025-04-13 01:02:20.314813 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.27s
2025-04-13 01:02:20.314833 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.13s
2025-04-13 01:02:20.316483
| orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.15s
2025-04-13 01:02:20.316507 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.03s
2025-04-13 01:02:20.316518 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.78s
2025-04-13 01:02:20.316528 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.64s
2025-04-13 01:02:20.316539 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.50s
2025-04-13 01:02:20.316550 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.43s
2025-04-13 01:02:20.316561 | orchestrator | barbican : Creating barbican database ----------------------------------- 3.16s
2025-04-13 01:02:20.316572 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.10s
2025-04-13 01:02:20.316592 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.72s
2025-04-13 01:02:20.316604 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.28s
2025-04-13 01:02:20.316615 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.09s
2025-04-13 01:02:20.316626 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.84s
2025-04-13 01:02:20.316638 | orchestrator | 2025-04-13 01:02:20 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:02:20.316659 | orchestrator | 2025-04-13 01:02:20 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED
2025-04-13 01:02:20.319264 | orchestrator | 2025-04-13 01:02:20 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED
2025-04-13 01:02:23.346670 | orchestrator | 2025-04-13 01:02:20 | INFO  | Wait 1 second(s) until the next check
2025-04-13 
01:02:23.346810 | orchestrator | 2025-04-13 01:02:23 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:23.347172 | orchestrator | 2025-04-13 01:02:23 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:23.347215 | orchestrator | 2025-04-13 01:02:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:02:23.347684 | orchestrator | 2025-04-13 01:02:23 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:23.348245 | orchestrator | 2025-04-13 01:02:23 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:23.349072 | orchestrator | 2025-04-13 01:02:23 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:26.376747 | orchestrator | 2025-04-13 01:02:26 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:26.377197 | orchestrator | 2025-04-13 01:02:26 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:26.378132 | orchestrator | 2025-04-13 01:02:26 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:02:26.379301 | orchestrator | 2025-04-13 01:02:26 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:26.380154 | orchestrator | 2025-04-13 01:02:26 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:26.380269 | orchestrator | 2025-04-13 01:02:26 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:29.430559 | orchestrator | 2025-04-13 01:02:29 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:29.430877 | orchestrator | 2025-04-13 01:02:29 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:29.432520 | orchestrator | 2025-04-13 01:02:29 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 
01:02:29.433141 | orchestrator | 2025-04-13 01:02:29 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:29.441001 | orchestrator | 2025-04-13 01:02:29 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:32.477870 | orchestrator | 2025-04-13 01:02:29 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:32.477985 | orchestrator | 2025-04-13 01:02:32 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:32.478466 | orchestrator | 2025-04-13 01:02:32 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:32.479779 | orchestrator | 2025-04-13 01:02:32 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:02:32.480622 | orchestrator | 2025-04-13 01:02:32 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:32.481648 | orchestrator | 2025-04-13 01:02:32 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:35.513711 | orchestrator | 2025-04-13 01:02:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:35.513845 | orchestrator | 2025-04-13 01:02:35 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:35.514148 | orchestrator | 2025-04-13 01:02:35 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:35.514976 | orchestrator | 2025-04-13 01:02:35 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:02:35.516014 | orchestrator | 2025-04-13 01:02:35 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:35.516971 | orchestrator | 2025-04-13 01:02:35 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:38.546393 | orchestrator | 2025-04-13 01:02:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:38.546531 | orchestrator 
| 2025-04-13 01:02:38 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:38.546651 | orchestrator | 2025-04-13 01:02:38 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:38.546679 | orchestrator | 2025-04-13 01:02:38 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:02:38.547404 | orchestrator | 2025-04-13 01:02:38 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:38.547943 | orchestrator | 2025-04-13 01:02:38 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:41.587867 | orchestrator | 2025-04-13 01:02:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:41.588012 | orchestrator | 2025-04-13 01:02:41 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:41.591259 | orchestrator | 2025-04-13 01:02:41 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:41.592568 | orchestrator | 2025-04-13 01:02:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:02:41.594677 | orchestrator | 2025-04-13 01:02:41 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:41.596304 | orchestrator | 2025-04-13 01:02:41 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:41.596425 | orchestrator | 2025-04-13 01:02:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:44.625706 | orchestrator | 2025-04-13 01:02:44 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:44.626851 | orchestrator | 2025-04-13 01:02:44 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:44.627483 | orchestrator | 2025-04-13 01:02:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:02:44.628060 | orchestrator | 
2025-04-13 01:02:44 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:44.629473 | orchestrator | 2025-04-13 01:02:44 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:47.672534 | orchestrator | 2025-04-13 01:02:44 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:47.672678 | orchestrator | 2025-04-13 01:02:47 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:47.673145 | orchestrator | 2025-04-13 01:02:47 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:47.674005 | orchestrator | 2025-04-13 01:02:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:02:47.675315 | orchestrator | 2025-04-13 01:02:47 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:47.676624 | orchestrator | 2025-04-13 01:02:47 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:50.721395 | orchestrator | 2025-04-13 01:02:47 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:50.721645 | orchestrator | 2025-04-13 01:02:50 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:50.723173 | orchestrator | 2025-04-13 01:02:50 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:50.723263 | orchestrator | 2025-04-13 01:02:50 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:02:50.723484 | orchestrator | 2025-04-13 01:02:50 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:50.723517 | orchestrator | 2025-04-13 01:02:50 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:53.748301 | orchestrator | 2025-04-13 01:02:50 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:53.748453 | orchestrator | 2025-04-13 01:02:53 | INFO  | 
Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:53.749058 | orchestrator | 2025-04-13 01:02:53 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:53.749098 | orchestrator | 2025-04-13 01:02:53 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:02:53.749664 | orchestrator | 2025-04-13 01:02:53 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:53.750613 | orchestrator | 2025-04-13 01:02:53 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:56.783869 | orchestrator | 2025-04-13 01:02:53 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:56.784007 | orchestrator | 2025-04-13 01:02:56 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:56.784826 | orchestrator | 2025-04-13 01:02:56 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:56.784857 | orchestrator | 2025-04-13 01:02:56 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:02:56.784872 | orchestrator | 2025-04-13 01:02:56 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:56.784894 | orchestrator | 2025-04-13 01:02:56 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:02:56.785129 | orchestrator | 2025-04-13 01:02:56 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:02:59.836702 | orchestrator | 2025-04-13 01:02:59 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:02:59.837220 | orchestrator | 2025-04-13 01:02:59 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:02:59.837950 | orchestrator | 2025-04-13 01:02:59 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:02:59.838787 | orchestrator | 2025-04-13 01:02:59 | INFO  | Task 
3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:02:59.839386 | orchestrator | 2025-04-13 01:02:59 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:02.891347 | orchestrator | 2025-04-13 01:02:59 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:02.891513 | orchestrator | 2025-04-13 01:03:02 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:03:02.893231 | orchestrator | 2025-04-13 01:03:02 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:02.893267 | orchestrator | 2025-04-13 01:03:02 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:02.893736 | orchestrator | 2025-04-13 01:03:02 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:03:02.896768 | orchestrator | 2025-04-13 01:03:02 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:05.937338 | orchestrator | 2025-04-13 01:03:02 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:05.937448 | orchestrator | 2025-04-13 01:03:05 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:03:05.939404 | orchestrator | 2025-04-13 01:03:05 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:05.942277 | orchestrator | 2025-04-13 01:03:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:05.944341 | orchestrator | 2025-04-13 01:03:05 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:03:05.946966 | orchestrator | 2025-04-13 01:03:05 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:05.947214 | orchestrator | 2025-04-13 01:03:05 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:08.998978 | orchestrator | 2025-04-13 01:03:08 | INFO  | Task 
c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:03:09.000732 | orchestrator | 2025-04-13 01:03:08 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:09.004204 | orchestrator | 2025-04-13 01:03:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:09.006158 | orchestrator | 2025-04-13 01:03:09 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:03:09.006855 | orchestrator | 2025-04-13 01:03:09 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:12.058604 | orchestrator | 2025-04-13 01:03:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:12.058753 | orchestrator | 2025-04-13 01:03:12 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:03:12.059081 | orchestrator | 2025-04-13 01:03:12 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:12.060424 | orchestrator | 2025-04-13 01:03:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:12.061685 | orchestrator | 2025-04-13 01:03:12 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:03:12.062828 | orchestrator | 2025-04-13 01:03:12 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:15.135495 | orchestrator | 2025-04-13 01:03:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:15.135632 | orchestrator | 2025-04-13 01:03:15 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:03:15.136571 | orchestrator | 2025-04-13 01:03:15 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:15.138275 | orchestrator | 2025-04-13 01:03:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:15.139522 | orchestrator | 2025-04-13 01:03:15 | INFO  | Task 
3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:03:15.141413 | orchestrator | 2025-04-13 01:03:15 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:18.193689 | orchestrator | 2025-04-13 01:03:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:18.193833 | orchestrator | 2025-04-13 01:03:18 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:03:18.195071 | orchestrator | 2025-04-13 01:03:18 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:18.195158 | orchestrator | 2025-04-13 01:03:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:18.196605 | orchestrator | 2025-04-13 01:03:18 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:03:18.197736 | orchestrator | 2025-04-13 01:03:18 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:21.274787 | orchestrator | 2025-04-13 01:03:18 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:21.274927 | orchestrator | 2025-04-13 01:03:21 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:03:21.277276 | orchestrator | 2025-04-13 01:03:21 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:21.279359 | orchestrator | 2025-04-13 01:03:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:21.281481 | orchestrator | 2025-04-13 01:03:21 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:03:21.284050 | orchestrator | 2025-04-13 01:03:21 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:21.284102 | orchestrator | 2025-04-13 01:03:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:24.335163 | orchestrator | 2025-04-13 01:03:24 | INFO  | Task 
c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:03:24.336583 | orchestrator | 2025-04-13 01:03:24 | INFO  | Task c576580c-c4aa-4040-bb91-9752c3d332c6 is in state STARTED 2025-04-13 01:03:24.338461 | orchestrator | 2025-04-13 01:03:24 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:24.339973 | orchestrator | 2025-04-13 01:03:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:24.341592 | orchestrator | 2025-04-13 01:03:24 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:03:24.343271 | orchestrator | 2025-04-13 01:03:24 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:27.403084 | orchestrator | 2025-04-13 01:03:24 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:27.403271 | orchestrator | 2025-04-13 01:03:27 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:03:27.403415 | orchestrator | 2025-04-13 01:03:27 | INFO  | Task c576580c-c4aa-4040-bb91-9752c3d332c6 is in state STARTED 2025-04-13 01:03:27.404500 | orchestrator | 2025-04-13 01:03:27 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:27.406845 | orchestrator | 2025-04-13 01:03:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:27.408075 | orchestrator | 2025-04-13 01:03:27 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:03:27.410335 | orchestrator | 2025-04-13 01:03:27 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:27.410463 | orchestrator | 2025-04-13 01:03:27 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:30.464228 | orchestrator | 2025-04-13 01:03:30 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state STARTED 2025-04-13 01:03:30.465492 | orchestrator | 2025-04-13 01:03:30 | INFO  | Task 
c576580c-c4aa-4040-bb91-9752c3d332c6 is in state STARTED
2025-04-13 01:03:30.466440 | orchestrator | 2025-04-13 01:03:30 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED
2025-04-13 01:03:30.467164 | orchestrator | 2025-04-13 01:03:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:03:30.471701 | orchestrator | 2025-04-13 01:03:30 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED
2025-04-13 01:03:30.473315 | orchestrator | 2025-04-13 01:03:30 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED
2025-04-13 01:03:33.523645 | orchestrator | 2025-04-13 01:03:30 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:03:33.523774 | orchestrator | 2025-04-13 01:03:33 | INFO  | Task c5e27506-a1fe-4f4a-a185-a8e9a737cb22 is in state SUCCESS
2025-04-13 01:03:33.524662 | orchestrator |
2025-04-13 01:03:33.524699 | orchestrator |
2025-04-13 01:03:33.524712 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-13 01:03:33.524725 | orchestrator |
2025-04-13 01:03:33.524738 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-13 01:03:33.524751 | orchestrator | Sunday 13 April 2025 01:02:22 +0000 (0:00:00.606) 0:00:00.606 **********
2025-04-13 01:03:33.524764 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:03:33.524779 | orchestrator | ok: [testbed-node-1]
2025-04-13 01:03:33.524791 | orchestrator | ok: [testbed-node-2]
2025-04-13 01:03:33.524803 | orchestrator |
2025-04-13 01:03:33.524816 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-13 01:03:33.524829 | orchestrator | Sunday 13 April 2025 01:02:23 +0000 (0:00:00.555) 0:00:01.162 **********
2025-04-13 01:03:33.524841 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-04-13 01:03:33.524854 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-04-13 01:03:33.524867 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-04-13 01:03:33.524879 | orchestrator |
2025-04-13 01:03:33.524891 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-04-13 01:03:33.524903 | orchestrator |
2025-04-13 01:03:33.524916 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-04-13 01:03:33.524928 | orchestrator | Sunday 13 April 2025 01:02:23 +0000 (0:00:00.270) 0:00:01.432 **********
2025-04-13 01:03:33.524940 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 01:03:33.524954 | orchestrator |
2025-04-13 01:03:33.524966 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-04-13 01:03:33.524978 | orchestrator | Sunday 13 April 2025 01:02:23 +0000 (0:00:00.588) 0:00:02.021 **********
2025-04-13 01:03:33.524990 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-04-13 01:03:33.525002 | orchestrator |
2025-04-13 01:03:33.525015 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-04-13 01:03:33.525027 | orchestrator | Sunday 13 April 2025 01:02:27 +0000 (0:00:03.324) 0:00:05.346 **********
2025-04-13 01:03:33.525039 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-04-13 01:03:33.525052 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-04-13 01:03:33.525064 | orchestrator |
2025-04-13 01:03:33.525076 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-04-13 01:03:33.525088 | orchestrator | Sunday 13 April 2025 01:02:33 +0000 (0:00:06.437) 0:00:11.783 **********
2025-04-13 01:03:33.525101 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-04-13 01:03:33.525151 | orchestrator |
2025-04-13 01:03:33.525164 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-04-13 01:03:33.525200 | orchestrator | Sunday 13 April 2025 01:02:37 +0000 (0:00:03.552) 0:00:15.335 **********
2025-04-13 01:03:33.525213 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-04-13 01:03:33.525225 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-04-13 01:03:33.525237 | orchestrator |
2025-04-13 01:03:33.525249 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-04-13 01:03:33.525261 | orchestrator | Sunday 13 April 2025 01:02:41 +0000 (0:00:03.204) 0:00:19.184 **********
2025-04-13 01:03:33.525274 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-04-13 01:03:33.525286 | orchestrator |
2025-04-13 01:03:33.525298 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-04-13 01:03:33.525313 | orchestrator | Sunday 13 April 2025 01:02:44 +0000 (0:00:03.204) 0:00:22.389 **********
2025-04-13 01:03:33.525326 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-04-13 01:03:33.525340 | orchestrator |
2025-04-13 01:03:33.525354 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-04-13 01:03:33.525368 | orchestrator | Sunday 13 April 2025 01:02:48 +0000 (0:00:04.097) 0:00:26.486 **********
2025-04-13 01:03:33.525382 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:03:33.525396 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:03:33.525410 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:03:33.525423 | orchestrator |
2025-04-13 01:03:33.525437 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-04-13 01:03:33.525451
| orchestrator | Sunday 13 April 2025 01:02:48 +0000 (0:00:00.492) 0:00:26.978 ********** 2025-04-13 01:03:33.525467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.525584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 
01:03:33.525608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.525634 | orchestrator | 2025-04-13 01:03:33.525648 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-04-13 01:03:33.525663 | orchestrator | Sunday 13 April 2025 01:02:50 +0000 (0:00:02.020) 0:00:28.999 ********** 2025-04-13 01:03:33.525675 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:03:33.525688 | orchestrator | 2025-04-13 01:03:33.525700 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-04-13 01:03:33.525713 | orchestrator | Sunday 13 April 2025 01:02:51 +0000 (0:00:00.228) 0:00:29.227 ********** 2025-04-13 01:03:33.525725 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:03:33.525737 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:03:33.525749 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:03:33.525762 | orchestrator | 2025-04-13 01:03:33.525774 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-04-13 01:03:33.525786 | 
orchestrator | Sunday 13 April 2025 01:02:51 +0000 (0:00:00.623) 0:00:29.851 ********** 2025-04-13 01:03:33.525799 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:03:33.525811 | orchestrator | 2025-04-13 01:03:33.525824 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-04-13 01:03:33.525836 | orchestrator | Sunday 13 April 2025 01:02:52 +0000 (0:00:00.575) 0:00:30.427 ********** 2025-04-13 01:03:33.525849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.525887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.525902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.525922 | orchestrator | 2025-04-13 01:03:33.525934 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-04-13 01:03:33.525947 | orchestrator | Sunday 13 April 2025 01:02:54 +0000 (0:00:01.902) 0:00:32.330 ********** 2025-04-13 01:03:33.525960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-13 01:03:33.525973 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:03:33.525986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-13 01:03:33.525998 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:03:33.526164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-13 01:03:33.526186 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:03:33.526199 | orchestrator | 2025-04-13 01:03:33.526212 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-04-13 01:03:33.526224 | orchestrator | Sunday 13 April 2025 01:02:54 +0000 (0:00:00.500) 0:00:32.830 ********** 2025-04-13 01:03:33.526245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-13 01:03:33.526259 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:03:33.526272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-13 01:03:33.526284 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:03:33.526297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-13 01:03:33.526310 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:03:33.526323 | orchestrator | 2025-04-13 01:03:33.526335 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-04-13 01:03:33.526347 | orchestrator | Sunday 13 April 2025 01:02:56 +0000 (0:00:01.351) 0:00:34.182 ********** 2025-04-13 01:03:33.526369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.526388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.526401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.526414 | orchestrator | 2025-04-13 01:03:33.526426 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-04-13 01:03:33.526439 | orchestrator | Sunday 13 April 2025 01:02:58 +0000 (0:00:02.460) 0:00:36.642 ********** 2025-04-13 01:03:33.526451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.526465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.526491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.526504 | orchestrator | 2025-04-13 01:03:33.526517 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-04-13 01:03:33.526529 | orchestrator | Sunday 13 April 2025 01:03:03 +0000 (0:00:05.153) 0:00:41.796 ********** 2025-04-13 01:03:33.526542 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-13 01:03:33.526554 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-13 01:03:33.526567 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-13 01:03:33.526579 | orchestrator | 2025-04-13 01:03:33.526591 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-04-13 01:03:33.526603 | orchestrator | Sunday 13 April 2025 01:03:05 +0000 (0:00:01.897) 0:00:43.694 ********** 2025-04-13 01:03:33.526615 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:33.526627 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:03:33.526640 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:03:33.526652 | orchestrator | 2025-04-13 01:03:33.526664 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-04-13 01:03:33.526676 | orchestrator | Sunday 13 April 2025 01:03:07 +0000 (0:00:01.560) 0:00:45.254 
********** 2025-04-13 01:03:33.526688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-13 01:03:33.526701 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:03:33.526714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-13 
01:03:33.526732 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:03:33.526797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-13 01:03:33.526812 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:03:33.526824 | orchestrator | 2025-04-13 01:03:33.526836 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-04-13 01:03:33.526849 | orchestrator | Sunday 13 April 2025 01:03:07 +0000 (0:00:00.736) 0:00:45.991 ********** 2025-04-13 01:03:33.526861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.526874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.526887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-13 01:03:33.526906 | orchestrator | 2025-04-13 01:03:33.526918 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-04-13 01:03:33.526931 | orchestrator | Sunday 13 April 2025 01:03:09 +0000 (0:00:01.263) 0:00:47.254 ********** 2025-04-13 01:03:33.526943 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:33.526955 | orchestrator | 2025-04-13 01:03:33.526980 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-04-13 01:03:33.526993 | orchestrator | Sunday 13 April 2025 01:03:11 +0000 (0:00:02.719) 0:00:49.973 ********** 2025-04-13 01:03:33.527016 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:33.527029 | orchestrator | 2025-04-13 01:03:33.527041 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-04-13 01:03:33.527054 | orchestrator | Sunday 13 April 2025 01:03:14 +0000 (0:00:02.331) 0:00:52.305 ********** 2025-04-13 01:03:33.527071 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:33.529507 | orchestrator | 2025-04-13 01:03:33.529528 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-13 01:03:33.529539 | orchestrator | Sunday 13 April 2025 01:03:26 +0000 (0:00:12.526) 0:01:04.831 ********** 2025-04-13 01:03:33.529549 | orchestrator | 2025-04-13 01:03:33.529560 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-13 01:03:33.529570 | orchestrator | Sunday 13 April 2025 01:03:26 +0000 (0:00:00.057) 0:01:04.888 ********** 2025-04-13 01:03:33.529580 | orchestrator | 2025-04-13 01:03:33.529590 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2025-04-13 01:03:33.529600 | orchestrator | Sunday 13 April 2025 01:03:27 +0000 (0:00:00.184) 0:01:05.073 ********** 2025-04-13 01:03:33.529610 | orchestrator | 2025-04-13 01:03:33.529620 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-04-13 01:03:33.529630 | orchestrator | Sunday 13 April 2025 01:03:27 +0000 (0:00:00.061) 0:01:05.134 ********** 2025-04-13 01:03:33.529640 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:33.529650 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:03:33.529660 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:03:33.529670 | orchestrator | 2025-04-13 01:03:33.529681 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:03:33.529694 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-13 01:03:33.529706 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-13 01:03:33.529718 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-13 01:03:33.529729 | orchestrator | 2025-04-13 01:03:33.529740 | orchestrator | 2025-04-13 01:03:33.529751 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:03:33.529770 | orchestrator | Sunday 13 April 2025 01:03:32 +0000 (0:00:05.476) 0:01:10.611 ********** 2025-04-13 01:03:33.529781 | orchestrator | =============================================================================== 2025-04-13 01:03:33.529792 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.53s 2025-04-13 01:03:33.529804 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.44s 2025-04-13 01:03:33.529815 | orchestrator | placement : Restart 
placement-api container ----------------------------- 5.48s 2025-04-13 01:03:33.529826 | orchestrator | placement : Copying over placement.conf --------------------------------- 5.15s 2025-04-13 01:03:33.529837 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.10s 2025-04-13 01:03:33.529856 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.85s 2025-04-13 01:03:33.529867 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.55s 2025-04-13 01:03:33.529878 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.33s 2025-04-13 01:03:33.529889 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.20s 2025-04-13 01:03:33.529900 | orchestrator | placement : Creating placement databases -------------------------------- 2.72s 2025-04-13 01:03:33.529911 | orchestrator | placement : Copying over config.json files for services ----------------- 2.46s 2025-04-13 01:03:33.529922 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.33s 2025-04-13 01:03:33.529933 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.02s 2025-04-13 01:03:33.529944 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.90s 2025-04-13 01:03:33.529955 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.90s 2025-04-13 01:03:33.529966 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.56s 2025-04-13 01:03:33.529976 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.35s 2025-04-13 01:03:33.529987 | orchestrator | placement : Check placement containers ---------------------------------- 1.26s 2025-04-13 01:03:33.529998 | orchestrator | placement : Copying over existing 
policy file --------------------------- 0.74s 2025-04-13 01:03:33.530009 | orchestrator | placement : Set placement policy file ----------------------------------- 0.62s 2025-04-13 01:03:33.530055 | orchestrator | 2025-04-13 01:03:33 | INFO  | Task c576580c-c4aa-4040-bb91-9752c3d332c6 is in state STARTED 2025-04-13 01:03:33.530069 | orchestrator | 2025-04-13 01:03:33 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:33.530085 | orchestrator | 2025-04-13 01:03:33 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:33.530101 | orchestrator | 2025-04-13 01:03:33 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:03:36.584163 | orchestrator | 2025-04-13 01:03:33 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:36.584289 | orchestrator | 2025-04-13 01:03:33 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:36.584329 | orchestrator | 2025-04-13 01:03:36 | INFO  | Task c576580c-c4aa-4040-bb91-9752c3d332c6 is in state SUCCESS 2025-04-13 01:03:36.585924 | orchestrator | 2025-04-13 01:03:36 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:36.587803 | orchestrator | 2025-04-13 01:03:36 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:03:36.589880 | orchestrator | 2025-04-13 01:03:36 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:36.590940 | orchestrator | 2025-04-13 01:03:36 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:03:36.592664 | orchestrator | 2025-04-13 01:03:36 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:39.642573 | orchestrator | 2025-04-13 01:03:36 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:39.642712 | orchestrator | 2025-04-13 01:03:39 | INFO  | Task 
a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:39.643807 | orchestrator | 2025-04-13 01:03:39 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:03:39.646370 | orchestrator | 2025-04-13 01:03:39 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:39.647962 | orchestrator | 2025-04-13 01:03:39 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state STARTED 2025-04-13 01:03:39.650259 | orchestrator | 2025-04-13 01:03:39 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:42.697411 | orchestrator | 2025-04-13 01:03:39 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:42.697559 | orchestrator | 2025-04-13 01:03:42 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:42.699097 | orchestrator | 2025-04-13 01:03:42 | INFO  | Task 8c093b94-32da-4aa4-b745-d5a489243f0e is in state STARTED 2025-04-13 01:03:42.700169 | orchestrator | 2025-04-13 01:03:42 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:03:42.701809 | orchestrator | 2025-04-13 01:03:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:42.704756 | orchestrator | 2025-04-13 01:03:42 | INFO  | Task 3bc5dd78-b5e9-462e-8fb3-fbc156ba0aba is in state SUCCESS 2025-04-13 01:03:42.706491 | orchestrator | 2025-04-13 01:03:42.706538 | orchestrator | None 2025-04-13 01:03:42.706554 | orchestrator | 2025-04-13 01:03:42.706568 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 01:03:42.706583 | orchestrator | 2025-04-13 01:03:42.706713 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 01:03:42.707162 | orchestrator | Sunday 13 April 2025 01:00:17 +0000 (0:00:00.427) 0:00:00.427 ********** 2025-04-13 01:03:42.707178 | orchestrator | ok: 
[testbed-node-0]
2025-04-13 01:03:42.707194 | orchestrator | ok: [testbed-node-1]
2025-04-13 01:03:42.707208 | orchestrator | ok: [testbed-node-2]
2025-04-13 01:03:42.707222 | orchestrator |
2025-04-13 01:03:42.707236 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-13 01:03:42.707251 | orchestrator | Sunday 13 April 2025 01:00:17 +0000 (0:00:00.501) 0:00:00.928 **********
2025-04-13 01:03:42.707265 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-04-13 01:03:42.707279 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-04-13 01:03:42.707293 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-04-13 01:03:42.707306 | orchestrator |
2025-04-13 01:03:42.707320 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-04-13 01:03:42.707334 | orchestrator |
2025-04-13 01:03:42.707347 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-04-13 01:03:42.708315 | orchestrator | Sunday 13 April 2025 01:00:18 +0000 (0:00:00.331) 0:00:01.259 **********
2025-04-13 01:03:42.708334 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 01:03:42.708350 | orchestrator |
2025-04-13 01:03:42.708364 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-04-13 01:03:42.708379 | orchestrator | Sunday 13 April 2025 01:00:18 +0000 (0:00:00.586) 0:00:01.845 **********
2025-04-13 01:03:42.708392 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-04-13 01:03:42.708433 | orchestrator |
2025-04-13 01:03:42.708447 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-04-13 01:03:42.708461 | orchestrator | Sunday 13 April 2025 01:00:22 +0000 (0:00:03.611) 0:00:05.457 **********
2025-04-13 01:03:42.708475 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-04-13 01:03:42.708490 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-04-13 01:03:42.708504 | orchestrator |
2025-04-13 01:03:42.708518 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-04-13 01:03:42.708761 | orchestrator | Sunday 13 April 2025 01:00:28 +0000 (0:00:06.357) 0:00:11.814 **********
2025-04-13 01:03:42.708776 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating projects (5 retries left).
2025-04-13 01:03:42.708791 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-04-13 01:03:42.708828 | orchestrator |
2025-04-13 01:03:42.708842 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-04-13 01:03:42.708856 | orchestrator | Sunday 13 April 2025 01:00:45 +0000 (0:00:16.497) 0:00:28.311 **********
2025-04-13 01:03:42.708870 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-04-13 01:03:42.708884 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-04-13 01:03:42.708899 | orchestrator |
2025-04-13 01:03:42.708912 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-04-13 01:03:42.708926 | orchestrator | Sunday 13 April 2025 01:00:49 +0000 (0:00:03.219) 0:00:32.118 **********
2025-04-13 01:03:42.708940 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-04-13 01:03:42.708954 | orchestrator |
2025-04-13 01:03:42.708968 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-04-13 01:03:42.708982 | orchestrator | Sunday 13 April 2025 01:00:52 +0000 (0:00:03.219) 0:00:35.338 **********
2025-04-13 01:03:42.708996 | orchestrator | changed: [testbed-node-0] =>
(item=designate -> service -> admin)
2025-04-13 01:03:42.709009 | orchestrator |
2025-04-13 01:03:42.709023 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-04-13 01:03:42.709036 | orchestrator | Sunday 13 April 2025 01:00:56 +0000 (0:00:04.390) 0:00:39.729 **********
2025-04-13 01:03:42.709052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.709152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.709174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.709198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.709215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.709230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.709245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709597 | orchestrator |
2025-04-13 01:03:42.709613 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-04-13 01:03:42.709629 | orchestrator | Sunday 13 April 2025 01:01:01 +0000 (0:00:04.496) 0:00:44.226 **********
2025-04-13 01:03:42.709645 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:03:42.709661 | orchestrator |
2025-04-13 01:03:42.709677 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-04-13 01:03:42.709692 | orchestrator | Sunday 13 April 2025 01:01:01 +0000 (0:00:00.256) 0:00:44.483 **********
2025-04-13 01:03:42.709708 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:03:42.709725 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:03:42.709747 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:03:42.709761 | orchestrator |
2025-04-13 01:03:42.709775 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-04-13 01:03:42.709789 | orchestrator | Sunday 13 April 2025 01:01:02 +0000 (0:00:01.048) 0:00:45.531 **********
2025-04-13 01:03:42.709803 | orchestrator | included:
/ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 01:03:42.709817 | orchestrator |
2025-04-13 01:03:42.709830 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-04-13 01:03:42.709844 | orchestrator | Sunday 13 April 2025 01:01:03 +0000 (0:00:01.193) 0:00:46.725 **********
2025-04-13 01:03:42.709858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.709873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.709888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.709931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.709955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.709970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.709984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.709999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.710014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.710065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.710155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.710202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.710218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.710233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.710248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.710263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.710311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.710335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.710350 | orchestrator |
2025-04-13 01:03:42.710364 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-04-13 01:03:42.710379 | orchestrator | Sunday 13 April 2025 01:01:10 +0000 (0:00:06.430) 0:00:53.156 **********
2025-04-13 01:03:42.710393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.710407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.710422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.710437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period':
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-13 01:03:42.710518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-13 01:03:42.710532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710562 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:03:42.710576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710655 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:03:42.710670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-13 01:03:42.710685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-13 01:03:42.710699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-13 
01:03:42.710793 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:03:42.710807 | orchestrator | 2025-04-13 01:03:42.710821 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-04-13 01:03:42.710835 | orchestrator | Sunday 13 April 2025 01:01:13 +0000 (0:00:03.301) 0:00:56.457 ********** 2025-04-13 01:03:42.710849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-13 01:03:42.710864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-13 01:03:42.710879 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.710972 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:03:42.710987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-13 01:03:42.711002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-13 01:03:42.711017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.711032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.711053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.711096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.711140 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:03:42.711156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-13 01:03:42.711171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-13 01:03:42.711185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.711212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.711227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.711273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.711290 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:03:42.711305 | orchestrator | 2025-04-13 01:03:42.711319 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-04-13 01:03:42.711333 | orchestrator | Sunday 13 April 2025 01:01:15 +0000 (0:00:02.581) 0:00:59.039 ********** 2025-04-13 01:03:42.711348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-13 01:03:42.711363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-13 01:03:42.711377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-13 01:03:42.711400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.711718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711734 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.711763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.711784 | orchestrator | 2025-04-13 01:03:42.711798 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-04-13 
01:03:42.711812 | orchestrator | Sunday 13 April 2025 01:01:24 +0000 (0:00:08.435) 0:01:07.474 ********** 2025-04-13 01:03:42.711827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-13 01:03:42.711841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-13 01:03:42.711885 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-13 01:03:42.711901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.711966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.712207 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.712243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.712272 | orchestrator |
2025-04-13 01:03:42.712286 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-04-13 01:03:42.712300 | orchestrator | Sunday 13 April 2025 01:01:47 +0000 (0:00:23.465) 0:01:30.939 **********
2025-04-13 01:03:42.712314 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-04-13 01:03:42.712329 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-04-13 01:03:42.712343 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-04-13 01:03:42.712357 | orchestrator |
2025-04-13 01:03:42.712370 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-04-13 01:03:42.712390 | orchestrator | Sunday 13 April 2025 01:01:53 +0000 (0:00:06.087) 0:01:37.027 **********
2025-04-13 01:03:42.712431 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-04-13 01:03:42.712447 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-04-13 01:03:42.712461 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-04-13 01:03:42.712475 | orchestrator |
2025-04-13 01:03:42.712489 | orchestrator | TASK [designate : Copying over rndc.conf]
************************************** 2025-04-13 01:03:42.712503 | orchestrator | Sunday 13 April 2025 01:01:57 +0000 (0:00:03.908) 0:01:40.935 ********** 2025-04-13 01:03:42.712517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-13 01:03:42.712538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-13 
01:03:42.712553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-13 01:03:42.712568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.712606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.712626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.712641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-13 01:03:42.712655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.712670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.712685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.712708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.712729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.712744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.712759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.712774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.712788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.712808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.712823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.712843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.712858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.712872 | orchestrator |
2025-04-13 01:03:42.712886 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-04-13 01:03:42.712900 | orchestrator | Sunday 13 April 2025 01:02:01 +0000 (0:00:03.378) 0:01:44.314 **********
2025-04-13 01:03:42.712914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.712930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.712950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.712971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.712986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.713051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.713291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713432 | orchestrator |
2025-04-13 01:03:42.713445 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-04-13 01:03:42.713463 | orchestrator | Sunday 13 April 2025 01:02:04 +0000 (0:00:03.665) 0:01:47.980 **********
2025-04-13 01:03:42.713476 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:03:42.713499 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:03:42.713513 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:03:42.713525 | orchestrator |
2025-04-13 01:03:42.713538 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-04-13 01:03:42.713550 | orchestrator | Sunday 13 April 2025 01:02:06 +0000 (0:00:01.153) 0:01:49.134 **********
2025-04-13 01:03:42.713571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.713586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.713599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713708 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:03:42.713722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.713734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.713748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713819 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:03:42.713830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.713840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.713851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-04-13 01:03:42.713921 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:03:42.713932 | orchestrator |
2025-04-13 01:03:42.713942 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-04-13 01:03:42.713952 | orchestrator | Sunday 13 April 2025 01:02:07 +0000 (0:00:01.613) 0:01:50.747 **********
2025-04-13 01:03:42.713962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.713973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.713996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-04-13 01:03:42.714012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.714070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-04-13 01:03:42.714081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named
53'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714175 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.714308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.714330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-13 01:03:42.714355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-13 01:03:42.714366 | orchestrator | 2025-04-13 01:03:42.714377 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-13 01:03:42.714387 | orchestrator | Sunday 13 April 2025 01:02:12 +0000 (0:00:05.058) 0:01:55.805 ********** 2025-04-13 01:03:42.714397 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:03:42.714408 | 
orchestrator | skipping: [testbed-node-1] 2025-04-13 01:03:42.714418 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:03:42.714428 | orchestrator | 2025-04-13 01:03:42.714438 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-04-13 01:03:42.714448 | orchestrator | Sunday 13 April 2025 01:02:13 +0000 (0:00:00.743) 0:01:56.548 ********** 2025-04-13 01:03:42.714459 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-04-13 01:03:42.714469 | orchestrator | 2025-04-13 01:03:42.714479 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-04-13 01:03:42.714490 | orchestrator | Sunday 13 April 2025 01:02:15 +0000 (0:00:02.140) 0:01:58.689 ********** 2025-04-13 01:03:42.714500 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-13 01:03:42.714510 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-04-13 01:03:42.714520 | orchestrator | 2025-04-13 01:03:42.714530 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-04-13 01:03:42.714540 | orchestrator | Sunday 13 April 2025 01:02:17 +0000 (0:00:02.223) 0:02:00.913 ********** 2025-04-13 01:03:42.714550 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:42.714560 | orchestrator | 2025-04-13 01:03:42.714570 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-04-13 01:03:42.714581 | orchestrator | Sunday 13 April 2025 01:02:33 +0000 (0:00:15.484) 0:02:16.397 ********** 2025-04-13 01:03:42.714590 | orchestrator | 2025-04-13 01:03:42.714601 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-04-13 01:03:42.714611 | orchestrator | Sunday 13 April 2025 01:02:33 +0000 (0:00:00.105) 0:02:16.503 ********** 2025-04-13 01:03:42.714621 | orchestrator | 2025-04-13 01:03:42.714636 | orchestrator | TASK 
[designate : Flush handlers] ********************************************** 2025-04-13 01:03:42.714646 | orchestrator | Sunday 13 April 2025 01:02:33 +0000 (0:00:00.055) 0:02:16.559 ********** 2025-04-13 01:03:42.714656 | orchestrator | 2025-04-13 01:03:42.714666 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-04-13 01:03:42.714676 | orchestrator | Sunday 13 April 2025 01:02:33 +0000 (0:00:00.066) 0:02:16.626 ********** 2025-04-13 01:03:42.714686 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:42.714697 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:03:42.714707 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:03:42.714717 | orchestrator | 2025-04-13 01:03:42.714727 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-04-13 01:03:42.714737 | orchestrator | Sunday 13 April 2025 01:02:47 +0000 (0:00:13.660) 0:02:30.286 ********** 2025-04-13 01:03:42.714747 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:03:42.714757 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:03:42.714767 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:42.714777 | orchestrator | 2025-04-13 01:03:42.714809 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-04-13 01:03:42.714820 | orchestrator | Sunday 13 April 2025 01:02:58 +0000 (0:00:11.013) 0:02:41.300 ********** 2025-04-13 01:03:42.714829 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:03:42.714839 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:03:42.714849 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:42.714859 | orchestrator | 2025-04-13 01:03:42.714869 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-04-13 01:03:42.714898 | orchestrator | Sunday 13 April 2025 01:03:09 +0000 (0:00:11.528) 0:02:52.829 ********** 2025-04-13 01:03:42.714909 | 
orchestrator | changed: [testbed-node-1] 2025-04-13 01:03:42.714920 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:03:42.714930 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:42.714940 | orchestrator | 2025-04-13 01:03:42.714950 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-04-13 01:03:42.714960 | orchestrator | Sunday 13 April 2025 01:03:18 +0000 (0:00:09.237) 0:03:02.066 ********** 2025-04-13 01:03:42.714970 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:03:42.714980 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:42.714990 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:03:42.715000 | orchestrator | 2025-04-13 01:03:42.715011 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-04-13 01:03:42.715021 | orchestrator | Sunday 13 April 2025 01:03:29 +0000 (0:00:10.521) 0:03:12.587 ********** 2025-04-13 01:03:42.715031 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:42.715040 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:03:42.715050 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:03:42.715060 | orchestrator | 2025-04-13 01:03:42.715070 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-04-13 01:03:42.715080 | orchestrator | Sunday 13 April 2025 01:03:35 +0000 (0:00:06.241) 0:03:18.829 ********** 2025-04-13 01:03:42.715090 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:03:42.715100 | orchestrator | 2025-04-13 01:03:42.715127 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:03:42.715138 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-13 01:03:42.715149 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-13 01:03:42.715160 | 
orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-13 01:03:42.715170 | orchestrator | 2025-04-13 01:03:42.715180 | orchestrator | 2025-04-13 01:03:42.715190 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:03:42.715200 | orchestrator | Sunday 13 April 2025 01:03:40 +0000 (0:00:05.125) 0:03:23.954 ********** 2025-04-13 01:03:42.715210 | orchestrator | =============================================================================== 2025-04-13 01:03:42.715220 | orchestrator | designate : Copying over designate.conf -------------------------------- 23.47s 2025-04-13 01:03:42.715230 | orchestrator | service-ks-register : designate | Creating projects -------------------- 16.50s 2025-04-13 01:03:42.715240 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.48s 2025-04-13 01:03:42.715249 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.66s 2025-04-13 01:03:42.715259 | orchestrator | designate : Restart designate-central container ------------------------ 11.53s 2025-04-13 01:03:42.715269 | orchestrator | designate : Restart designate-api container ---------------------------- 11.01s 2025-04-13 01:03:42.715279 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.52s 2025-04-13 01:03:42.715289 | orchestrator | designate : Restart designate-producer container ------------------------ 9.24s 2025-04-13 01:03:42.715299 | orchestrator | designate : Copying over config.json files for services ----------------- 8.44s 2025-04-13 01:03:42.715332 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.43s 2025-04-13 01:03:42.715343 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.36s 2025-04-13 01:03:42.715353 | orchestrator | designate : Restart 
designate-worker container -------------------------- 6.24s 2025-04-13 01:03:42.715363 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.09s 2025-04-13 01:03:42.715373 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.13s 2025-04-13 01:03:42.715383 | orchestrator | designate : Check designate containers ---------------------------------- 5.06s 2025-04-13 01:03:42.715393 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.50s 2025-04-13 01:03:42.715408 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.39s 2025-04-13 01:03:45.766644 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.91s 2025-04-13 01:03:45.766768 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.81s 2025-04-13 01:03:45.766788 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.67s 2025-04-13 01:03:45.766806 | orchestrator | 2025-04-13 01:03:42 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:45.766821 | orchestrator | 2025-04-13 01:03:42 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:45.766855 | orchestrator | 2025-04-13 01:03:45 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:45.768041 | orchestrator | 2025-04-13 01:03:45 | INFO  | Task 8c093b94-32da-4aa4-b745-d5a489243f0e is in state STARTED 2025-04-13 01:03:45.771693 | orchestrator | 2025-04-13 01:03:45 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:03:45.772451 | orchestrator | 2025-04-13 01:03:45 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:45.774795 | orchestrator | 2025-04-13 01:03:45 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 
01:03:48.829662 | orchestrator | 2025-04-13 01:03:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:48.829812 | orchestrator | 2025-04-13 01:03:48 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:03:48.832530 | orchestrator | 2025-04-13 01:03:48 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:48.837531 | orchestrator | 2025-04-13 01:03:48 | INFO  | Task 8c093b94-32da-4aa4-b745-d5a489243f0e is in state SUCCESS 2025-04-13 01:03:48.839320 | orchestrator | 2025-04-13 01:03:48 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:03:48.839370 | orchestrator | 2025-04-13 01:03:48 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:48.840501 | orchestrator | 2025-04-13 01:03:48 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:48.840736 | orchestrator | 2025-04-13 01:03:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:51.883489 | orchestrator | 2025-04-13 01:03:51 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:03:51.884008 | orchestrator | 2025-04-13 01:03:51 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:51.884546 | orchestrator | 2025-04-13 01:03:51 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:03:51.885812 | orchestrator | 2025-04-13 01:03:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:51.886673 | orchestrator | 2025-04-13 01:03:51 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:54.919542 | orchestrator | 2025-04-13 01:03:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:54.919800 | orchestrator | 2025-04-13 01:03:54 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:03:54.920468 | orchestrator 
| 2025-04-13 01:03:54 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:54.920524 | orchestrator | 2025-04-13 01:03:54 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:03:54.921446 | orchestrator | 2025-04-13 01:03:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:54.921720 | orchestrator | 2025-04-13 01:03:54 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:03:54.921832 | orchestrator | 2025-04-13 01:03:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:03:57.964646 | orchestrator | 2025-04-13 01:03:57 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:03:57.965688 | orchestrator | 2025-04-13 01:03:57 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:03:57.967578 | orchestrator | 2025-04-13 01:03:57 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:03:57.969351 | orchestrator | 2025-04-13 01:03:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:03:57.970859 | orchestrator | 2025-04-13 01:03:57 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:04:01.014057 | orchestrator | 2025-04-13 01:03:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:04:01.014244 | orchestrator | 2025-04-13 01:04:01 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:04:01.016355 | orchestrator | 2025-04-13 01:04:01 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:04:01.018406 | orchestrator | 2025-04-13 01:04:01 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:04:01.021287 | orchestrator | 2025-04-13 01:04:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:04:01.022748 | orchestrator | 
2025-04-13 01:04:01 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:04:01.023308 | orchestrator | 2025-04-13 01:04:01 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:04:04.078753 | orchestrator | 2025-04-13 01:04:04 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:04:04.082267 | orchestrator | 2025-04-13 01:04:04 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:04:04.085796 | orchestrator | 2025-04-13 01:04:04 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:04:04.087951 | orchestrator | 2025-04-13 01:04:04 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:04:04.089355 | orchestrator | 2025-04-13 01:04:04 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:04:07.137626 | orchestrator | 2025-04-13 01:04:04 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:04:07.137814 | orchestrator | 2025-04-13 01:04:07 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:04:07.138621 | orchestrator | 2025-04-13 01:04:07 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:04:07.138660 | orchestrator | 2025-04-13 01:04:07 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:04:07.140218 | orchestrator | 2025-04-13 01:04:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:04:07.142893 | orchestrator | 2025-04-13 01:04:07 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:04:10.185926 | orchestrator | 2025-04-13 01:04:07 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:04:10.186149 | orchestrator | 2025-04-13 01:04:10 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:04:10.186807 | orchestrator | 2025-04-13 01:04:10 | INFO  | 
Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:04:10.187397 | orchestrator | 2025-04-13 01:04:10 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:04:10.188893 | orchestrator | 2025-04-13 01:04:10 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:04:10.189668 | orchestrator | 2025-04-13 01:04:10 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state STARTED 2025-04-13 01:04:13.244741 | orchestrator | 2025-04-13 01:04:10 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:04:55.879633 | orchestrator | 2025-04-13 01:04:52 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:04:55.879774 | orchestrator | 2025-04-13 01:04:55 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:04:55.881581 | orchestrator | 2025-04-13 01:04:55 | INFO  | Task 
a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:04:55.881617 | orchestrator | 2025-04-13 01:04:55 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:04:55.882417 | orchestrator | 2025-04-13 01:04:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:04:55.888790 | orchestrator | 2025-04-13 01:04:55 | INFO  | Task 38cd2567-f146-4b57-8742-bdc8f7e42833 is in state SUCCESS 2025-04-13 01:04:55.893186 | orchestrator | 2025-04-13 01:04:55.893286 | orchestrator | 2025-04-13 01:04:55.893306 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 01:04:55.893422 | orchestrator | 2025-04-13 01:04:55.893462 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 01:04:55.893480 | orchestrator | Sunday 13 April 2025 01:03:44 +0000 (0:00:00.230) 0:00:00.230 ********** 2025-04-13 01:04:55.893495 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:04:55.893512 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:04:55.893540 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:04:55.893555 | orchestrator | 2025-04-13 01:04:55.893569 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 01:04:55.893584 | orchestrator | Sunday 13 April 2025 01:03:44 +0000 (0:00:00.408) 0:00:00.639 ********** 2025-04-13 01:04:55.893598 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-04-13 01:04:55.893612 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-04-13 01:04:55.893626 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-04-13 01:04:55.893650 | orchestrator | 2025-04-13 01:04:55.893665 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-04-13 01:04:55.893679 | orchestrator | 2025-04-13 01:04:55.893693 | orchestrator | TASK 
[Waiting for Keystone public port to be UP] ******************************* 2025-04-13 01:04:55.893709 | orchestrator | Sunday 13 April 2025 01:03:45 +0000 (0:00:00.493) 0:00:01.133 ********** 2025-04-13 01:04:55.893742 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:04:55.893758 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:04:55.893774 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:04:55.893790 | orchestrator | 2025-04-13 01:04:55.894323 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:04:55.894350 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:04:55.894367 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:04:55.894483 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:04:55.894503 | orchestrator | 2025-04-13 01:04:55.894518 | orchestrator | 2025-04-13 01:04:55.894533 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:04:55.894549 | orchestrator | Sunday 13 April 2025 01:03:45 +0000 (0:00:00.785) 0:00:01.919 ********** 2025-04-13 01:04:55.894564 | orchestrator | =============================================================================== 2025-04-13 01:04:55.894579 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.79s 2025-04-13 01:04:55.894593 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2025-04-13 01:04:55.894608 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s 2025-04-13 01:04:55.894622 | orchestrator | 2025-04-13 01:04:55.894636 | orchestrator | 2025-04-13 01:04:55.894668 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 
01:04:55.894682 | orchestrator | 2025-04-13 01:04:55.894696 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 01:04:55.894710 | orchestrator | Sunday 13 April 2025 01:00:16 +0000 (0:00:00.337) 0:00:00.337 ********** 2025-04-13 01:04:55.895156 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:04:55.895172 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:04:55.895186 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:04:55.895200 | orchestrator | ok: [testbed-node-3] 2025-04-13 01:04:55.895214 | orchestrator | ok: [testbed-node-4] 2025-04-13 01:04:55.895227 | orchestrator | ok: [testbed-node-5] 2025-04-13 01:04:55.895241 | orchestrator | 2025-04-13 01:04:55.895302 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 01:04:55.895318 | orchestrator | Sunday 13 April 2025 01:00:17 +0000 (0:00:00.971) 0:00:01.309 ********** 2025-04-13 01:04:55.895332 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-04-13 01:04:55.895346 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-04-13 01:04:55.895360 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-04-13 01:04:55.895374 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-04-13 01:04:55.895388 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-04-13 01:04:55.895402 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-04-13 01:04:55.895648 | orchestrator | 2025-04-13 01:04:55.895664 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-04-13 01:04:55.895678 | orchestrator | 2025-04-13 01:04:55.895693 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-04-13 01:04:55.895706 | orchestrator | Sunday 13 April 2025 01:00:18 +0000 (0:00:00.689) 0:00:01.998 ********** 2025-04-13 
01:04:55.895721 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 01:04:55.895737 | orchestrator | 2025-04-13 01:04:55.895752 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-04-13 01:04:55.895766 | orchestrator | Sunday 13 April 2025 01:00:19 +0000 (0:00:01.039) 0:00:03.037 ********** 2025-04-13 01:04:55.895807 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:04:55.895822 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:04:55.895836 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:04:55.896299 | orchestrator | ok: [testbed-node-3] 2025-04-13 01:04:55.896313 | orchestrator | ok: [testbed-node-4] 2025-04-13 01:04:55.896327 | orchestrator | ok: [testbed-node-5] 2025-04-13 01:04:55.896341 | orchestrator | 2025-04-13 01:04:55.896355 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-04-13 01:04:55.896369 | orchestrator | Sunday 13 April 2025 01:00:20 +0000 (0:00:01.182) 0:00:04.220 ********** 2025-04-13 01:04:55.896383 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:04:55.896397 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:04:55.896428 | orchestrator | ok: [testbed-node-3] 2025-04-13 01:04:55.896442 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:04:55.896625 | orchestrator | ok: [testbed-node-4] 2025-04-13 01:04:55.896772 | orchestrator | ok: [testbed-node-5] 2025-04-13 01:04:55.897323 | orchestrator | 2025-04-13 01:04:55.897342 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-04-13 01:04:55.897356 | orchestrator | Sunday 13 April 2025 01:00:21 +0000 (0:00:00.949) 0:00:05.170 ********** 2025-04-13 01:04:55.897370 | orchestrator | ok: [testbed-node-0] => { 2025-04-13 01:04:55.897385 | orchestrator |  "changed": false, 2025-04-13 01:04:55.897400 | orchestrator |  
"msg": "All assertions passed" 2025-04-13 01:04:55.897413 | orchestrator | } 2025-04-13 01:04:55.897428 | orchestrator | ok: [testbed-node-1] => { 2025-04-13 01:04:55.897442 | orchestrator |  "changed": false, 2025-04-13 01:04:55.897456 | orchestrator |  "msg": "All assertions passed" 2025-04-13 01:04:55.897470 | orchestrator | } 2025-04-13 01:04:55.897484 | orchestrator | ok: [testbed-node-2] => { 2025-04-13 01:04:55.897498 | orchestrator |  "changed": false, 2025-04-13 01:04:55.897512 | orchestrator |  "msg": "All assertions passed" 2025-04-13 01:04:55.897526 | orchestrator | } 2025-04-13 01:04:55.898531 | orchestrator | ok: [testbed-node-3] => { 2025-04-13 01:04:55.898579 | orchestrator |  "changed": false, 2025-04-13 01:04:55.898603 | orchestrator |  "msg": "All assertions passed" 2025-04-13 01:04:55.898627 | orchestrator | } 2025-04-13 01:04:55.898650 | orchestrator | ok: [testbed-node-4] => { 2025-04-13 01:04:55.898673 | orchestrator |  "changed": false, 2025-04-13 01:04:55.898800 | orchestrator |  "msg": "All assertions passed" 2025-04-13 01:04:55.898819 | orchestrator | } 2025-04-13 01:04:55.898834 | orchestrator | ok: [testbed-node-5] => { 2025-04-13 01:04:55.898849 | orchestrator |  "changed": false, 2025-04-13 01:04:55.898864 | orchestrator |  "msg": "All assertions passed" 2025-04-13 01:04:55.898879 | orchestrator | } 2025-04-13 01:04:55.899260 | orchestrator | 2025-04-13 01:04:55.899287 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-04-13 01:04:55.899302 | orchestrator | Sunday 13 April 2025 01:00:22 +0000 (0:00:00.608) 0:00:05.779 ********** 2025-04-13 01:04:55.899316 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.899330 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.899356 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.899370 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.899383 | orchestrator | skipping: [testbed-node-4] 
2025-04-13 01:04:55.899397 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.899411 | orchestrator | 2025-04-13 01:04:55.899425 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-04-13 01:04:55.899440 | orchestrator | Sunday 13 April 2025 01:00:23 +0000 (0:00:00.839) 0:00:06.618 ********** 2025-04-13 01:04:55.899454 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-04-13 01:04:55.899475 | orchestrator | 2025-04-13 01:04:55.899490 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-04-13 01:04:55.899504 | orchestrator | Sunday 13 April 2025 01:00:26 +0000 (0:00:03.307) 0:00:09.925 ********** 2025-04-13 01:04:55.899518 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-04-13 01:04:55.899533 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-04-13 01:04:55.899547 | orchestrator | 2025-04-13 01:04:55.899561 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-04-13 01:04:55.899575 | orchestrator | Sunday 13 April 2025 01:00:32 +0000 (0:00:06.347) 0:00:16.273 ********** 2025-04-13 01:04:55.899589 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-13 01:04:55.899604 | orchestrator | 2025-04-13 01:04:55.899618 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-04-13 01:04:55.899632 | orchestrator | Sunday 13 April 2025 01:00:36 +0000 (0:00:03.344) 0:00:19.617 ********** 2025-04-13 01:04:55.899646 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-13 01:04:55.899695 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-04-13 01:04:55.899711 | orchestrator | 2025-04-13 01:04:55.899725 | orchestrator | TASK [service-ks-register : neutron | 
Creating roles] ************************** 2025-04-13 01:04:55.899739 | orchestrator | Sunday 13 April 2025 01:00:40 +0000 (0:00:03.993) 0:00:23.611 ********** 2025-04-13 01:04:55.900286 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-13 01:04:55.900309 | orchestrator | 2025-04-13 01:04:55.900324 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-04-13 01:04:55.900338 | orchestrator | Sunday 13 April 2025 01:00:43 +0000 (0:00:03.199) 0:00:26.810 ********** 2025-04-13 01:04:55.900352 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-04-13 01:04:55.900367 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-04-13 01:04:55.900380 | orchestrator | 2025-04-13 01:04:55.900395 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-04-13 01:04:55.900409 | orchestrator | Sunday 13 April 2025 01:00:51 +0000 (0:00:08.487) 0:00:35.298 ********** 2025-04-13 01:04:55.900423 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.900438 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.900452 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.900466 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.900480 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.900494 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.900508 | orchestrator | 2025-04-13 01:04:55.900522 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-04-13 01:04:55.900537 | orchestrator | Sunday 13 April 2025 01:00:52 +0000 (0:00:00.755) 0:00:36.053 ********** 2025-04-13 01:04:55.900551 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.900565 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.900579 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.900593 | orchestrator | 
skipping: [testbed-node-3] 2025-04-13 01:04:55.900607 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.900622 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.900636 | orchestrator | 2025-04-13 01:04:55.900650 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-04-13 01:04:55.900664 | orchestrator | Sunday 13 April 2025 01:00:55 +0000 (0:00:02.445) 0:00:38.499 ********** 2025-04-13 01:04:55.900678 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:04:55.900693 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:04:55.900707 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:04:55.900721 | orchestrator | ok: [testbed-node-3] 2025-04-13 01:04:55.900735 | orchestrator | ok: [testbed-node-4] 2025-04-13 01:04:55.900841 | orchestrator | ok: [testbed-node-5] 2025-04-13 01:04:55.901706 | orchestrator | 2025-04-13 01:04:55.901748 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-04-13 01:04:55.901763 | orchestrator | Sunday 13 April 2025 01:00:56 +0000 (0:00:01.110) 0:00:39.610 ********** 2025-04-13 01:04:55.901777 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.901791 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.901805 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.901815 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.901825 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.902298 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.902373 | orchestrator | 2025-04-13 01:04:55.902392 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-04-13 01:04:55.902408 | orchestrator | Sunday 13 April 2025 01:01:00 +0000 (0:00:03.990) 0:00:43.600 ********** 2025-04-13 01:04:55.902425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.902476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.902493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.902511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.902771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2025-04-13 01:04:55.902803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.902832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.902849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.902866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.902882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.902949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.902968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.902990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.903005 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.903175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.903229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.903246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.903269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.903336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.903504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.903538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.903557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.903573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.903607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.903741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.903756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 
'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.903786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.904915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.904988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.905027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.905043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.905186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.905216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.905232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.905247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.905319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.905352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.905465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.905483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.905559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.905575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.905591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.905608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.905634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.905753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.905788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.905803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905817 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-13 01:04:55.905845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-13 01:04:55.905864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-13 01:04:55.905883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.905923 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.905941 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.905959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.905976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.905996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.906009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.906087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.906136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.906159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.906180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.906194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.906207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.906220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': 
False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.906232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.906261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.906282 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.906295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.906308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.906321 | orchestrator |
2025-04-13 01:04:55.906334 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-04-13 01:04:55.906347 | orchestrator | Sunday 13 April 2025 01:01:03 +0000 (0:00:03.735) 0:00:47.336 **********
2025-04-13 01:04:55.906359 | orchestrator | [WARNING]: Skipped
2025-04-13 01:04:55.906372 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-04-13 01:04:55.906384 | orchestrator | due to this access issue:
2025-04-13 01:04:55.906403 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-04-13 01:04:55.906422 | orchestrator | a directory
2025-04-13 01:04:55.906434 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-13 01:04:55.906447 | orchestrator |
2025-04-13 01:04:55.906460 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-04-13 01:04:55.906472 | orchestrator | Sunday 13 April 2025 01:01:04 +0000 (0:00:00.709) 0:00:48.045 **********
2025-04-13 01:04:55.906485 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 01:04:55.906498 | orchestrator |
2025-04-13 01:04:55.906511 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-04-13 01:04:55.906523 | orchestrator | Sunday 13 April 2025 01:01:05 +0000 (0:00:01.213) 0:00:49.258 **********
2025-04-13 01:04:55.906536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.906581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.906596 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.906609 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.906628 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.906649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.906662 | orchestrator |
2025-04-13 01:04:55.906675 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2025-04-13 01:04:55.906687 | orchestrator | Sunday 13 April 2025 01:01:10 +0000 (0:00:04.525) 0:00:53.783 **********
2025-04-13 01:04:55.906706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.906720 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.906733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.906751 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.906764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.906777 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.906789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.906802 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.906824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.906837 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.906857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.906870 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.906883 | orchestrator |
2025-04-13 01:04:55.906895 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-04-13 01:04:55.906915 | orchestrator | Sunday 13 April 2025 01:01:15 +0000 (0:00:05.094) 0:00:58.878 **********
2025-04-13 01:04:55.906928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.906947 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.906959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.906972 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.906994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.907007 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.907027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.907041 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.907054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.907072 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.907084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.907097 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.907157 | orchestrator |
2025-04-13 01:04:55.907172 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-04-13 01:04:55.907184 | orchestrator | Sunday 13 April 2025 01:01:20 +0000 (0:00:05.322) 0:01:04.201 **********
2025-04-13 01:04:55.907196 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.907210 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.907223 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.907235 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.907247 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.907259 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.907271 | orchestrator |
2025-04-13 01:04:55.907284 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-04-13 01:04:55.907296 | orchestrator | Sunday 13 April 2025 01:01:24 +0000 (0:00:03.807) 0:01:08.009 **********
2025-04-13 01:04:55.907309 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.907321 | orchestrator |
2025-04-13 01:04:55.907333 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-04-13 01:04:55.907345 | orchestrator | Sunday 13 April 2025 01:01:24 +0000 (0:00:00.164) 0:01:08.173 **********
2025-04-13 01:04:55.907358 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.907370 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.907382 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.907395 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.907407 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.907419 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.907431 | orchestrator |
2025-04-13 01:04:55.907443 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-04-13 01:04:55.907456 | orchestrator | Sunday 13 April 2025 01:01:25 +0000 (0:00:01.105) 0:01:09.278 **********
2025-04-13 01:04:55.907468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.907505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-04-13 01:04:55.907557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.907589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.907608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.907630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:04:55.907651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.907671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-13 01:04:55.907701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.907712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907722 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.907733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.907752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.907892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-04-13 01:04:55.907909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.907957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.907968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''],
'dimensions': {}}})  2025-04-13 01:04:55.907986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.908033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.908043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.908084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.908095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.908126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.908159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.908170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.908185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908211 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.908230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.908241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.908262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.908278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  
2025-04-13 01:04:55.908302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.908313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.908324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908335 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.908345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.908370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.908382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 
01:04:55.908409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.908435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.908512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.908539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.908549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.908565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.908576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.908605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.908650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:04:55.908672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.908707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.908718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.908745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908755 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.908766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.908776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.908792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.908823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.908838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908849 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.908860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.908875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.908905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.910077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.910100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.910130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.910140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.910162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.910172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.910187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.910197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.910206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.910220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.910237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.910247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.910256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.910269 | 
orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.910278 | orchestrator | 2025-04-13 01:04:55.910287 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-04-13 01:04:55.910296 | orchestrator | Sunday 13 April 2025 01:01:30 +0000 (0:00:04.726) 0:01:14.005 ********** 2025-04-13 01:04:55.910305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.910318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.910334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.910343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.910356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.910366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.910379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.910388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.910403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.910412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.910422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-04-13 01:04:55.910478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.910487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.910531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.910550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.910593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-04-13 01:04:55.910612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.910634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-04-13 01:04:55.910657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.910692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.910719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.910729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-13 01:04:55.910738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:04:55.910747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.910790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.910799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-04-13 01:04:55.910861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.910871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-13 01:04:55.910900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.910910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.910919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.910948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:04:55.910973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.910982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.910991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.911006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.911019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:04:55.911028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.911037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:04:55.911052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.911062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy',
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.911089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.911099 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.911123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.911140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 
01:04:55.911150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.911163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.911177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.911193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.911203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.911212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.911225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.911235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.911248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.913474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.913570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-13 01:04:55.913583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.913609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.913620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 
01:04:55.913649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.913676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.913693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.913708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.913732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-13 01:04:55.913747 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.913771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.913786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.913806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.913815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.913831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.913840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.913856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-13 01:04:55.913873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.913883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.913892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.913906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.913915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.913938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.913947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.913958 | orchestrator | 2025-04-13 01:04:55.913974 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-04-13 01:04:55.913988 | orchestrator | Sunday 13 April 2025 01:01:35 +0000 (0:00:04.979) 0:01:18.985 ********** 2025-04-13 01:04:55.914002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2025-04-13 01:04:55.914055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.914136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914154 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.914164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.914174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.914217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.914261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.914308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.914318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.914342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.914398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.914416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.914425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.914455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.914494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': 
'30'}}})  2025-04-13 01:04:55.914518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.914541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.914580 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.914589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.914654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.914672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.914686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.914722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.914732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.914741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.914803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.914818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 
'timeout': '30'}}})  2025-04-13 01:04:55.914827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.914855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.914867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.914910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.914919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.914942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.914983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.914992 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-13 01:04:55.915028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.915038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.915047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.915087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.915103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915189 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.915215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.915236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.915245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.915285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.915295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.915314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.915323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-13 01:04:55.915377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.915395 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.915404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.915443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.915454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-13 01:04:55.915472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.915510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.915525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.915558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.915567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915576 | orchestrator | 2025-04-13 01:04:55.915586 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-04-13 01:04:55.915595 | orchestrator | Sunday 13 April 2025 01:01:43 +0000 (0:00:07.774) 0:01:26.760 ********** 2025-04-13 01:04:55.915604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.915618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.915790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 
'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.915881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.915911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.915942 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.915956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.915978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.916001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.916038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.916053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916067 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.916124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.916190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 
5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.916257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.916330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.916344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.916379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.916415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.916446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.916483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.916498 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916512 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.916527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.916559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.916625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.916684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.916699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.916724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916755 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.916775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.916791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.916822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.916831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.916839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.916848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.916891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.916909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.916921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.916929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.916937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.916975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.916988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.916996 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.917005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.917025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.917063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.917098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.917147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.917175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.917204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.917213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.917249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.917258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.917301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.917341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.917363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 
01:04:55.917379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.917396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.917431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.917553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.917618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.917627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917635 | orchestrator | 2025-04-13 01:04:55.917649 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-04-13 01:04:55.917662 | orchestrator | Sunday 13 April 2025 01:01:46 +0000 (0:00:03.476) 0:01:30.237 ********** 2025-04-13 01:04:55.917675 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:04:55.917690 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.917713 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.917725 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.917733 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:04:55.917741 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:04:55.917749 | orchestrator | 2025-04-13 01:04:55.917757 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-04-13 01:04:55.917765 | orchestrator | Sunday 13 April 2025 01:01:51 +0000 (0:00:05.117) 0:01:35.355 ********** 2025-04-13 01:04:55.917834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.917850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.917964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.917981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.917992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.918009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.918054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.918063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.918090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.918194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.918228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.919501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.919543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.919553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.919570 | 
orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.919653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.919665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.919689 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.919700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.919712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.919784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.919801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.919812 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.919838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.919851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.919871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.919923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.919934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.919955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.919963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.919974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.920003 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.920011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 
01:04:55.920068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.920078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.920131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.920226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.920241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.920249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.920277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.920284 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.920329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.920361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.920381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920407 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.920415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.920461 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.920532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.920547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.920593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.920625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.920645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.920663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2025-04-13 01:04:55.920746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.920759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.920792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920885 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.920913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.920921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.920928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.920936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-04-13 01:04:55.921011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.921029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.921048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.921060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.921073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.921086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.921182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.921200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.921214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.921222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.921229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.921292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.921309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2025-04-13 01:04:55.921327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.921339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.921351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.921363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.921428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.921442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.921465 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.921477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.921489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.921501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.921585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.921600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.921616 | orchestrator | 2025-04-13 01:04:55.921624 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-04-13 01:04:55.921632 | orchestrator | Sunday 13 April 2025 01:01:57 +0000 (0:00:05.099) 0:01:40.454 ********** 2025-04-13 01:04:55.921641 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.921653 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.921664 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.921675 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.921686 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.921698 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.921722 | orchestrator | 2025-04-13 01:04:55.921734 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-04-13 01:04:55.921745 | orchestrator | Sunday 13 April 2025 01:01:59 +0000 (0:00:01.950) 0:01:42.405 ********** 2025-04-13 01:04:55.921757 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.921774 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.921785 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.921796 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.921803 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.921809 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.921816 | orchestrator | 2025-04-13 01:04:55.921823 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-04-13 01:04:55.921830 | orchestrator | Sunday 13 April 2025 01:02:01 +0000 (0:00:02.028) 0:01:44.433 ********** 2025-04-13 01:04:55.921837 | orchestrator | skipping: [testbed-node-3] 
2025-04-13 01:04:55.921844 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.921851 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.921858 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.921864 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.921871 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.921878 | orchestrator | 2025-04-13 01:04:55.921885 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-04-13 01:04:55.921892 | orchestrator | Sunday 13 April 2025 01:02:04 +0000 (0:00:03.497) 0:01:47.931 ********** 2025-04-13 01:04:55.921898 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.921905 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.921912 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.921919 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.921926 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.921933 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.921939 | orchestrator | 2025-04-13 01:04:55.921946 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-04-13 01:04:55.921953 | orchestrator | Sunday 13 April 2025 01:02:07 +0000 (0:00:03.389) 0:01:51.320 ********** 2025-04-13 01:04:55.921960 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.921967 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.921973 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.921980 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.921987 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.921994 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.922001 | orchestrator | 2025-04-13 01:04:55.922008 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-04-13 01:04:55.922036 | orchestrator | Sunday 
13 April 2025 01:02:10 +0000 (0:00:02.145) 0:01:53.466 ********** 2025-04-13 01:04:55.922045 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.922052 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.922082 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.922089 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.922096 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.922103 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.922156 | orchestrator | 2025-04-13 01:04:55.922169 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-04-13 01:04:55.922187 | orchestrator | Sunday 13 April 2025 01:02:13 +0000 (0:00:03.269) 0:01:56.736 ********** 2025-04-13 01:04:55.922201 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-13 01:04:55.922214 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.922227 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-13 01:04:55.922240 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.922254 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-13 01:04:55.922268 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.922282 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-13 01:04:55.922295 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.922303 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-13 01:04:55.922312 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.922320 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-13 01:04:55.922391 | orchestrator | skipping: [testbed-node-4] 
2025-04-13 01:04:55.922403 | orchestrator | 2025-04-13 01:04:55.922416 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-04-13 01:04:55.922424 | orchestrator | Sunday 13 April 2025 01:02:15 +0000 (0:00:02.509) 0:01:59.245 ********** 2025-04-13 01:04:55.922434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.922443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.922452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.922498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.922554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.922568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.922580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.922588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.922600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.922631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.922643 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.922704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.922714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.922722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.922754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.922762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.922768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.922822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.922833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.922844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.922882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.922894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.922952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.922969 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.922978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.922985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.922991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.923025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.923081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.923161 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.923210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2025-04-13 01:04:55.923221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.923232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.923290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.923297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.923340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.923361 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.923368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.923413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.923426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.923467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.923493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.923500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923506 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.923520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.923572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.923607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923622 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-04-13 01:04:55.923629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.923703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-04-13 01:04:55.923716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.923764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.923789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.923811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.923834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.923841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.923908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.923914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.923931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.923986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.924013 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.924021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 
'timeout': '30'}}})  2025-04-13 01:04:55.924027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.924040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 
5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924047 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.924087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.924194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.924204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924210 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.924217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.924223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-04-13 01:04:55.924320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.924327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 
01:04:55.924350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.924358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.924449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.924483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.924493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.924567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.924580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924589 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.924598 | orchestrator | 2025-04-13 01:04:55.924608 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-04-13 01:04:55.924617 | orchestrator | Sunday 13 April 2025 01:02:18 +0000 (0:00:02.780) 0:02:02.026 ********** 2025-04-13 01:04:55.924654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.924666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-04-13 01:04:55.924762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.924788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.924819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.924856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.924936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.924956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.924966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.924983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.925045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.925055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.925085 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.925125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.925199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.925223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.925235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.925280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.925299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.925309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.925372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.925424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925435 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.925445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.925461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.925541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.925629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 
'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.925639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.925668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.925696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.925750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.925769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.925780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.925811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.925857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.925867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 
'timeout': '30'}}})  2025-04-13 01:04:55.925896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.925908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.925970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.925989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.926070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.926077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.926143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.926154 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.926165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.926208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.926215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.926311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 
01:04:55.926317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.926324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.926382 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.926389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.926400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.926409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926428 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.926437 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.926447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.926485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2025-04-13 01:04:55.926562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.926600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.926622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.926636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926642 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.926648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.926666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.926702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.926710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.926716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.926722 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.926728 | orchestrator |
2025-04-13 01:04:55.926734 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-04-13 01:04:55.926741 | orchestrator | Sunday 13 April 2025 01:02:21 +0000 (0:00:03.049) 0:02:05.076 **********
2025-04-13 01:04:55.926747 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.926753 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.926759 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.926764 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.926770 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.926782 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.926789 | orchestrator |
2025-04-13 01:04:55.926795 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-04-13 01:04:55.926801 | orchestrator | Sunday 13 April 2025 01:02:23 +0000 (0:00:02.037) 0:02:07.113 **********
2025-04-13 01:04:55.926807 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.926813 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.926819 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.926825 | orchestrator | changed: [testbed-node-5]
2025-04-13 01:04:55.926831 | orchestrator | changed: [testbed-node-3]
2025-04-13 01:04:55.926836 | orchestrator | changed: [testbed-node-4]
2025-04-13 01:04:55.926842 | orchestrator |
2025-04-13 01:04:55.926848 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-04-13 01:04:55.926854 | orchestrator | Sunday 13 April 2025 01:02:29 +0000 (0:00:05.265) 0:02:12.379 **********
2025-04-13 01:04:55.926865 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.926871 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.926876 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.926882 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.926888 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.926894 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.926899 | orchestrator |
2025-04-13 01:04:55.926905 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-04-13 01:04:55.926911 | orchestrator | Sunday 13 April 2025 01:02:31 +0000 (0:00:02.486) 0:02:14.865 **********
2025-04-13 01:04:55.926917 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.926923 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.926928 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.926947 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.926953 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.926959 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.926965 | orchestrator |
2025-04-13 01:04:55.926971 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-04-13 01:04:55.926977 | orchestrator | Sunday 13 April 2025 01:02:34 +0000 (0:00:02.625) 0:02:17.491 **********
2025-04-13 01:04:55.926983 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.926989 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.926995 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.927001 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.927007 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.927014 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.927021 | orchestrator |
2025-04-13 01:04:55.927027 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-04-13 01:04:55.927034 | orchestrator | Sunday 13 April 2025 01:02:37 +0000 (0:00:03.801) 0:02:21.293 **********
2025-04-13 01:04:55.927041 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.927047 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.927054 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.927061 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.927067 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.927074 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.927084 | orchestrator |
2025-04-13 01:04:55.927094 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-04-13 01:04:55.927104 | orchestrator | Sunday 13 April 2025 01:02:39 +0000 (0:00:02.035) 0:02:23.328 **********
2025-04-13 01:04:55.927159 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.927170 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.927181 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.927192 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.927203 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.927214 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.927224 | orchestrator |
2025-04-13 01:04:55.927231 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-04-13 01:04:55.927238 | orchestrator | Sunday 13 April 2025 01:02:42 +0000 (0:00:02.140) 0:02:25.469 **********
2025-04-13 01:04:55.927245 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.927252 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.927258 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.927265 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.927271 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.927278 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.927284 | orchestrator |
2025-04-13 01:04:55.927291 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-04-13 01:04:55.927298 | orchestrator | Sunday 13 April 2025 01:02:45 +0000 (0:00:03.432) 0:02:28.901 **********
2025-04-13 01:04:55.927304 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.927311 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.927317 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.927330 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.927337 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.927344 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.927350 | orchestrator |
2025-04-13 01:04:55.927357 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-04-13 01:04:55.927363 | orchestrator | Sunday 13 April 2025 01:02:49 +0000 (0:00:03.499) 0:02:32.400 **********
2025-04-13 01:04:55.927369 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.927379 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.927384 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.927390 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.927396 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.927402 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.927407 | orchestrator |
2025-04-13 01:04:55.927413 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-04-13 01:04:55.927419 | orchestrator | Sunday 13 April 2025 01:02:52 +0000 (0:00:03.287) 0:02:35.688 **********
2025-04-13 01:04:55.927425 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-04-13 01:04:55.927432 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.927438 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-04-13 01:04:55.927649 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.927657 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-04-13 01:04:55.927663 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.927671 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-04-13 01:04:55.927676 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.927682 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-04-13 01:04:55.927687 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.927693 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-04-13 01:04:55.927698 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.927703 | orchestrator |
2025-04-13 01:04:55.927708 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-04-13 01:04:55.927714 | orchestrator | Sunday 13 April 2025 01:02:54 +0000 (0:00:02.496) 0:02:38.185 **********
2025-04-13 01:04:55.927754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode':
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.927762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.927809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.927822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.927838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.927850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.927873 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.927880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.927898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.927904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927909 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:04:55.927915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.927932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.927959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.927981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.927990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.927996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.928001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.928012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.928052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.928057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928063 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:04:55.928068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.928085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928103 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.928138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-04-13 01:04:55.928179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.928191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.928208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.928257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.928265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928273 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:04:55.928283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.928292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-04-13 01:04:55.928333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.928338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 
01:04:55.928349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.928381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.928392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.928432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.928438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928443 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:04:55.928449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.928455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.928500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 
01:04:55.928506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.928543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.928554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.928585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.928591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928596 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:04:55.928602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.928607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.928644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.928686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.928697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.928718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.928733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928739 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:04:55.928744 | orchestrator | 2025-04-13 01:04:55.928749 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-04-13 01:04:55.928755 | orchestrator | Sunday 13 April 2025 01:02:57 +0000 (0:00:02.793) 0:02:40.978 ********** 2025-04-13 01:04:55.928760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.928766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.928775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928797 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.928817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-13 01:04:55.928864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.928878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.928889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.928909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 
'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.928949 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.928957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.928963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.928969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.928991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.929028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.929037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.929070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.929082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.929087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.929150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.929169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.929174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.929200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.929216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.929227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 
'timeout': '30'}}})  2025-04-13 01:04:55.929233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.929256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-13 01:04:55.929293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929304 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-13 01:04:55.929310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-13 01:04:55.929332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.929349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.929383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.929388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-13 01:04:55.929405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.929416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.929433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-13 01:04:55.929451 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-13 01:04:55.929457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:04:55.929474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:04:55.929479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-13 01:04:55.929490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-13 01:04:55.929496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-13 01:04:55.929502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-13 01:04:55.929507 | orchestrator |
2025-04-13 01:04:55.929513 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-04-13 01:04:55.929525 | orchestrator | Sunday 13 April 2025 01:03:03 +0000 (0:00:06.095) 0:02:47.074 **********
2025-04-13 01:04:55.929534 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:04:55.929543 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:04:55.929552 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:04:55.929560 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:04:55.929569 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:04:55.929578 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:04:55.929588 | orchestrator |
2025-04-13 01:04:55.929597 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-04-13 01:04:55.929607 | orchestrator | Sunday 13 April 2025 01:03:04 +0000 (0:00:00.978) 0:02:48.052 **********
2025-04-13 01:04:55.929613 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:04:55.929618 | orchestrator |
2025-04-13 01:04:55.929623 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-04-13 01:04:55.929629 | orchestrator | Sunday 13 April 2025 01:03:07 +0000 (0:00:02.576) 0:02:50.628 **********
2025-04-13 01:04:55.929634 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:04:55.929641 | orchestrator |
2025-04-13 01:04:55.929655 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-04-13 01:04:55.929664 | orchestrator | Sunday 13 April 2025 01:03:09 +0000 (0:00:02.290) 0:02:52.919 **********
2025-04-13 01:04:55.929673 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:04:55.929681 | orchestrator |
2025-04-13 01:04:55.929689 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-13 01:04:55.929698 | orchestrator | Sunday 13 April 2025 01:03:48 +0000 (0:00:38.945) 0:03:31.865 **********
2025-04-13 01:04:55.929707 | orchestrator |
2025-04-13 01:04:55.929715 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-13 01:04:55.929721 | orchestrator | Sunday 13 April 2025 01:03:48 +0000 (0:00:00.059) 0:03:31.925 **********
2025-04-13 01:04:55.929726 | orchestrator |
2025-04-13 01:04:55.929732 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-13 01:04:55.929737 | orchestrator | Sunday 13 April 2025 01:03:48 +0000 (0:00:00.239) 0:03:32.164 **********
2025-04-13 01:04:55.929742 | orchestrator |
2025-04-13 01:04:55.929747 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-13 01:04:55.929753 | orchestrator | Sunday 13 April 2025 01:03:48 +0000 (0:00:00.056) 0:03:32.221 **********
2025-04-13 01:04:55.929758 | orchestrator |
2025-04-13 01:04:55.929763 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-13 01:04:55.929768 | orchestrator | Sunday 13 April 2025 01:03:48 +0000 (0:00:00.058) 0:03:32.280 **********
2025-04-13 01:04:55.929773 | orchestrator |
2025-04-13 01:04:55.929779 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-13 01:04:55.929784 | orchestrator | Sunday 13 April 2025 01:03:48 +0000 (0:00:00.053) 0:03:32.334 **********
2025-04-13 01:04:55.929789 | orchestrator |
2025-04-13 01:04:55.929794 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-04-13 01:04:55.929800 | orchestrator | Sunday 13 April 2025 01:03:49 +0000 (0:00:00.275) 0:03:32.609 **********
2025-04-13 01:04:55.929805 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:04:55.929810 | orchestrator | changed: [testbed-node-1]
2025-04-13 01:04:55.929816 | orchestrator | changed: [testbed-node-2]
2025-04-13 01:04:55.929821 | orchestrator |
2025-04-13 01:04:55.929826 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-04-13 01:04:55.929835 | orchestrator | Sunday 13 April 2025 01:04:10 +0000 (0:00:21.746) 0:03:54.355 **********
2025-04-13 01:04:58.938402 | orchestrator | changed: [testbed-node-3]
2025-04-13 01:04:58.938530 | orchestrator | changed: [testbed-node-5]
2025-04-13 01:04:58.938549 | orchestrator | changed: [testbed-node-4]
2025-04-13 01:04:58.938563 | orchestrator |
2025-04-13 01:04:58.938580 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 01:04:58.938598 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-13 01:04:58.938613 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-04-13 01:04:58.938627 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-04-13 01:04:58.938641 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-04-13 01:04:58.938655 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-04-13 01:04:58.938670 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-04-13 01:04:58.938684 | orchestrator |
2025-04-13 01:04:58.938698 | orchestrator |
2025-04-13 01:04:58.938712 | orchestrator | TASKS RECAP ********************************************************************
2025-04-13 01:04:58.938757 | orchestrator | Sunday 13 April 2025 01:04:55 +0000 (0:00:44.344) 0:04:38.700 **********
2025-04-13 01:04:58.938772 | orchestrator | ===============================================================================
2025-04-13 01:04:58.938785 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 44.34s
2025-04-13 01:04:58.938799 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 38.95s
2025-04-13 01:04:58.938813 | orchestrator | neutron : Restart neutron-server container ----------------------------- 21.75s
2025-04-13 01:04:58.938826 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.49s
2025-04-13 01:04:58.938840 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.77s
2025-04-13 01:04:58.938869 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.35s
2025-04-13 01:04:58.938884 | orchestrator | neutron : Check neutron containers -------------------------------------- 6.10s
2025-04-13 01:04:58.938897 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 5.32s
2025-04-13 01:04:58.938911 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.27s
2025-04-13 01:04:58.938925 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.12s
2025-04-13 01:04:58.938938 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.10s
2025-04-13 01:04:58.938952 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 5.09s
2025-04-13 01:04:58.938967 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.98s
2025-04-13 01:04:58.938983 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.73s
2025-04-13 01:04:58.938999 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.53s
2025-04-13 01:04:58.939015 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.99s
2025-04-13 01:04:58.939032 | orchestrator | Setting sysctl values --------------------------------------------------- 3.99s
2025-04-13 01:04:58.939047 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.81s
2025-04-13 01:04:58.939063 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.80s
2025-04-13 01:04:58.939079 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.74s
2025-04-13 01:04:58.939245 | orchestrator | 2025-04-13 01:04:58 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED
2025-04-13 01:04:58.940244 | orchestrator | 2025-04-13 01:04:58 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED
2025-04-13 01:04:58.940289 | orchestrator | 2025-04-13 01:04:58 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED
2025-04-13 01:04:58.941327 | orchestrator | 2025-04-13 01:04:58 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:04:58.942870 | orchestrator | 2025-04-13 01:04:58 | INFO  | Task 606eec39-10be-4806-b9f5-43d824737ea1 is in state STARTED
2025-04-13 01:04:58.943027 | orchestrator | 2025-04-13 01:04:58 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:05:01.987078 | orchestrator | 2025-04-13 01:05:01 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED
2025-04-13 01:05:01.988518 | orchestrator | 2025-04-13 01:05:01 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED
2025-04-13 01:05:01.988560 | orchestrator | 2025-04-13 01:05:01 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED
2025-04-13 01:05:01.990919 | orchestrator | 2025-04-13 01:05:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:05:01.992394 | orchestrator | 2025-04-13 01:05:01 | INFO  | Task 606eec39-10be-4806-b9f5-43d824737ea1 is in state STARTED
2025-04-13 01:05:01.992891 | orchestrator | 2025-04-13 01:05:01 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:05:05.034471 | orchestrator | 2025-04-13 01:05:05 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED
2025-04-13 01:05:05.039373 | orchestrator | 2025-04-13 01:05:05 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED
2025-04-13 01:05:05.039937 | orchestrator | 2025-04-13 01:05:05 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED
2025-04-13 01:05:05.041154 | orchestrator | 2025-04-13 01:05:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:05:05.041930 | orchestrator | 2025-04-13 01:05:05 | INFO  | Task 606eec39-10be-4806-b9f5-43d824737ea1 is in state STARTED
2025-04-13 01:05:08.070556 | orchestrator | 2025-04-13 01:05:05 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:05:08.070698 | orchestrator | 2025-04-13 01:05:08 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED
2025-04-13 01:05:08.071363 | orchestrator | 2025-04-13 01:05:08 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED
2025-04-13 01:05:08.071480 | orchestrator
| 2025-04-13 01:05:08 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:05:08.071674 | orchestrator | 2025-04-13 01:05:08 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:05:08.072457 | orchestrator | 2025-04-13 01:05:08 | INFO  | Task 606eec39-10be-4806-b9f5-43d824737ea1 is in state STARTED 2025-04-13 01:05:08.073557 | orchestrator | 2025-04-13 01:05:08 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:05:11.126900 | orchestrator | 2025-04-13 01:05:11 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:05:11.127441 | orchestrator | 2025-04-13 01:05:11 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:05:11.127489 | orchestrator | 2025-04-13 01:05:11 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:05:11.128171 | orchestrator | 2025-04-13 01:05:11 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:05:11.128889 | orchestrator | 2025-04-13 01:05:11 | INFO  | Task 606eec39-10be-4806-b9f5-43d824737ea1 is in state STARTED 2025-04-13 01:05:14.173261 | orchestrator | 2025-04-13 01:05:11 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:05:14.173404 | orchestrator | 2025-04-13 01:05:14 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:05:14.174775 | orchestrator | 2025-04-13 01:05:14 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:05:14.174822 | orchestrator | 2025-04-13 01:05:14 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:05:14.175732 | orchestrator | 2025-04-13 01:05:14 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:05:14.176965 | orchestrator | 2025-04-13 01:05:14 | INFO  | Task 606eec39-10be-4806-b9f5-43d824737ea1 is in state STARTED 2025-04-13 01:05:17.238589 | orchestrator | 
2025-04-13 01:05:14 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:05:17.238734 | orchestrator | 2025-04-13 01:05:17 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:05:17.240051 | orchestrator | 2025-04-13 01:05:17 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:05:17.241085 | orchestrator | 2025-04-13 01:05:17 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:05:17.242553 | orchestrator | 2025-04-13 01:05:17 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:05:17.244258 | orchestrator | 2025-04-13 01:05:17 | INFO  | Task 606eec39-10be-4806-b9f5-43d824737ea1 is in state STARTED 2025-04-13 01:05:17.244770 | orchestrator | 2025-04-13 01:05:17 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:05:20.281766 | orchestrator | 2025-04-13 01:05:20 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:05:20.284626 | orchestrator | 2025-04-13 01:05:20 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:05:20.285591 | orchestrator | 2025-04-13 01:05:20 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:05:20.286322 | orchestrator | 2025-04-13 01:05:20 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:05:20.289035 | orchestrator | 2025-04-13 01:05:20 | INFO  | Task 606eec39-10be-4806-b9f5-43d824737ea1 is in state STARTED 2025-04-13 01:05:23.327760 | orchestrator | 2025-04-13 01:05:20 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:05:23.327932 | orchestrator | 2025-04-13 01:05:23 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:05:23.330397 | orchestrator | 2025-04-13 01:05:23 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:05:23.332642 | orchestrator | 2025-04-13 01:05:23 | INFO  | 
Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state STARTED 2025-04-13 01:05:23.334492 | orchestrator | 2025-04-13 01:05:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:05:23.337205 | orchestrator | 2025-04-13 01:05:23 | INFO  | Task 606eec39-10be-4806-b9f5-43d824737ea1 is in state STARTED 2025-04-13 01:05:26.373449 | orchestrator | 2025-04-13 01:05:23 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:05:26.373593 | orchestrator | 2025-04-13 01:05:26 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:05:26.374282 | orchestrator | 2025-04-13 01:05:26 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:05:26.374348 | orchestrator | 2025-04-13 01:05:26 | INFO  | Task 7cbea458-7aec-4ad0-8257-d0ec82befa42 is in state SUCCESS 2025-04-13 01:05:26.375555 | orchestrator | 2025-04-13 01:05:26.375590 | orchestrator | 2025-04-13 01:05:26.375608 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 01:05:26.375627 | orchestrator | 2025-04-13 01:05:26.375645 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 01:05:26.375663 | orchestrator | Sunday 13 April 2025 01:03:35 +0000 (0:00:00.212) 0:00:00.212 ********** 2025-04-13 01:05:26.375681 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:05:26.375699 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:05:26.375717 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:05:26.375734 | orchestrator | 2025-04-13 01:05:26.375753 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 01:05:26.375770 | orchestrator | Sunday 13 April 2025 01:03:35 +0000 (0:00:00.282) 0:00:00.494 ********** 2025-04-13 01:05:26.375788 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-04-13 01:05:26.375806 | orchestrator | ok: [testbed-node-1] => 
(item=enable_magnum_True)
2025-04-13 01:05:26.376180 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-04-13 01:05:26.376200 | orchestrator |
2025-04-13 01:05:26.376218 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-04-13 01:05:26.376235 | orchestrator |
2025-04-13 01:05:26.376253 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-04-13 01:05:26.376270 | orchestrator | Sunday 13 April 2025 01:03:36 +0000 (0:00:00.238) 0:00:00.732 **********
2025-04-13 01:05:26.376288 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 01:05:26.376336 | orchestrator |
2025-04-13 01:05:26.376355 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-04-13 01:05:26.376372 | orchestrator | Sunday 13 April 2025 01:03:36 +0000 (0:00:00.548) 0:00:01.281 **********
2025-04-13 01:05:26.376390 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-04-13 01:05:26.376408 | orchestrator |
2025-04-13 01:05:26.376426 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-04-13 01:05:26.376444 | orchestrator | Sunday 13 April 2025 01:03:40 +0000 (0:00:03.496) 0:00:04.777 **********
2025-04-13 01:05:26.376462 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-04-13 01:05:26.376480 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-04-13 01:05:26.376498 | orchestrator |
2025-04-13 01:05:26.376516 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-04-13 01:05:26.376534 | orchestrator | Sunday 13 April 2025 01:03:46 +0000 (0:00:06.502) 0:00:11.280 **********
2025-04-13 01:05:26.376549 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-04-13 01:05:26.376566 | orchestrator |
2025-04-13 01:05:26.376582 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-04-13 01:05:26.376599 | orchestrator | Sunday 13 April 2025 01:03:50 +0000 (0:00:03.600) 0:00:14.880 **********
2025-04-13 01:05:26.376617 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-04-13 01:05:26.376634 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-04-13 01:05:26.376667 | orchestrator |
2025-04-13 01:05:26.376685 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-04-13 01:05:26.376702 | orchestrator | Sunday 13 April 2025 01:03:54 +0000 (0:00:04.055) 0:00:18.936 **********
2025-04-13 01:05:26.376720 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-04-13 01:05:26.376738 | orchestrator |
2025-04-13 01:05:26.376754 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-04-13 01:05:26.376768 | orchestrator | Sunday 13 April 2025 01:03:57 +0000 (0:00:03.308) 0:00:22.245 **********
2025-04-13 01:05:26.376783 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-04-13 01:05:26.376798 | orchestrator |
2025-04-13 01:05:26.376813 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-04-13 01:05:26.376828 | orchestrator | Sunday 13 April 2025 01:04:01 +0000 (0:00:04.328) 0:00:26.573 **********
2025-04-13 01:05:26.376842 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:05:26.376857 | orchestrator |
2025-04-13 01:05:26.376872 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-04-13 01:05:26.376887 | orchestrator | Sunday 13 April 2025 01:04:05 +0000 (0:00:03.395) 0:00:29.969 **********
2025-04-13 01:05:26.376902 | orchestrator | changed:
[testbed-node-0] 2025-04-13 01:05:26.376917 | orchestrator | 2025-04-13 01:05:26.376932 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-04-13 01:05:26.376946 | orchestrator | Sunday 13 April 2025 01:04:09 +0000 (0:00:04.480) 0:00:34.449 ********** 2025-04-13 01:05:26.376961 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:05:26.376976 | orchestrator | 2025-04-13 01:05:26.376990 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-04-13 01:05:26.377005 | orchestrator | Sunday 13 April 2025 01:04:13 +0000 (0:00:04.090) 0:00:38.540 ********** 2025-04-13 01:05:26.377035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.377063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.377079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.377094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.377125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.377157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.377229 | orchestrator | 2025-04-13 01:05:26.377245 | orchestrator | TASK [magnum : Check if policies shall be 
overwritten] ************************* 2025-04-13 01:05:26.377260 | orchestrator | Sunday 13 April 2025 01:04:16 +0000 (0:00:02.208) 0:00:40.748 ********** 2025-04-13 01:05:26.377275 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:05:26.377290 | orchestrator | 2025-04-13 01:05:26.377305 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-04-13 01:05:26.377320 | orchestrator | Sunday 13 April 2025 01:04:16 +0000 (0:00:00.196) 0:00:40.945 ********** 2025-04-13 01:05:26.377334 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:05:26.377350 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:05:26.377365 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:05:26.377380 | orchestrator | 2025-04-13 01:05:26.377394 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-04-13 01:05:26.377409 | orchestrator | Sunday 13 April 2025 01:04:16 +0000 (0:00:00.529) 0:00:41.475 ********** 2025-04-13 01:05:26.377424 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-13 01:05:26.377439 | orchestrator | 2025-04-13 01:05:26.377454 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-04-13 01:05:26.377468 | orchestrator | Sunday 13 April 2025 01:04:17 +0000 (0:00:00.825) 0:00:42.300 ********** 2025-04-13 01:05:26.377484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 01:05:26.377499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:05:26.377515 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:05:26.377530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 01:05:26.377561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:05:26.377577 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:05:26.377593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 01:05:26.377645 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:05:26.377661 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:05:26.377676 | orchestrator | 2025-04-13 01:05:26.377691 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-04-13 01:05:26.377706 | orchestrator | Sunday 13 April 2025 01:04:19 +0000 (0:00:02.083) 0:00:44.384 ********** 2025-04-13 01:05:26.377721 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:05:26.377736 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:05:26.377751 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:05:26.377766 | orchestrator | 2025-04-13 01:05:26.377780 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-13 01:05:26.377795 | orchestrator | Sunday 13 April 2025 01:04:20 +0000 (0:00:00.693) 0:00:45.077 ********** 2025-04-13 01:05:26.377811 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:05:26.377826 | orchestrator | 2025-04-13 01:05:26.377841 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-04-13 01:05:26.377885 | orchestrator | Sunday 13 April 2025 01:04:22 +0000 (0:00:01.556) 
0:00:46.634 ********** 2025-04-13 01:05:26.377901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.377924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.377940 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.377968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.377984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.378087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.378106 | orchestrator | 2025-04-13 01:05:26.378174 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-04-13 01:05:26.378192 | orchestrator | Sunday 13 April 2025 01:04:26 +0000 (0:00:04.167) 0:00:50.802 ********** 2025-04-13 01:05:26.378219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 01:05:26.378236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:05:26.378252 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:05:26.378284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 01:05:26.378310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:05:26.378327 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:05:26.378349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 01:05:26.378372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:05:26.378388 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:05:26.378403 | orchestrator | 2025-04-13 01:05:26.378419 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-04-13 01:05:26.378435 | orchestrator | Sunday 13 April 2025 01:04:27 +0000 (0:00:01.564) 0:00:52.366 ********** 2025-04-13 01:05:26.378452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 01:05:26.378482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:05:26.378529 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:05:26.378546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 
01:05:26.378571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:05:26.378587 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:05:26.378604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 01:05:26.378632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:05:26.378656 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:05:26.378674 | orchestrator | 2025-04-13 01:05:26.378691 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-04-13 01:05:26.378707 | orchestrator | Sunday 13 April 2025 01:04:29 +0000 (0:00:02.113) 0:00:54.480 ********** 2025-04-13 01:05:26.378724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.378741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.378781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.378798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.378820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.378844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.378861 | orchestrator | 2025-04-13 01:05:26.378878 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-04-13 01:05:26.378893 | orchestrator | Sunday 13 April 2025 01:04:33 +0000 (0:00:03.401) 0:00:57.881 ********** 2025-04-13 01:05:26.378909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.378945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.378962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.378987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.379004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.379031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.379049 | orchestrator | 2025-04-13 01:05:26.379066 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-04-13 01:05:26.379089 | orchestrator | Sunday 13 April 2025 01:04:39 +0000 (0:00:05.998) 0:01:03.880 ********** 2025-04-13 01:05:26.379106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 01:05:26.379144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:05:26.379168 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:05:26.379185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 01:05:26.379202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:05:26.379219 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:05:26.379254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-13 01:05:26.379271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:05:26.379292 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:05:26.379309 | orchestrator | 2025-04-13 01:05:26.379326 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-04-13 01:05:26.379343 | orchestrator | Sunday 13 April 2025 01:04:40 +0000 (0:00:00.769) 0:01:04.650 ********** 2025-04-13 01:05:26.379359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.379376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.379403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-13 01:05:26.379427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.379443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.379465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:05:26.379481 | orchestrator | 2025-04-13 01:05:26.379497 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-13 01:05:26.379514 | orchestrator | Sunday 13 April 2025 01:04:42 +0000 (0:00:02.557) 0:01:07.207 ********** 2025-04-13 01:05:26.379530 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:05:26.379547 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:05:26.379563 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:05:26.379580 | orchestrator | 2025-04-13 01:05:26.379595 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-04-13 01:05:26.379611 | orchestrator | Sunday 13 April 2025 01:04:42 +0000 (0:00:00.325) 0:01:07.533 ********** 2025-04-13 01:05:26.379626 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:05:26.379643 | orchestrator | 2025-04-13 01:05:26.379660 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-04-13 01:05:26.379676 | orchestrator | Sunday 13 April 2025 01:04:45 +0000 (0:00:02.538) 0:01:10.071 
********** 2025-04-13 01:05:26.379693 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:05:26.379709 | orchestrator | 2025-04-13 01:05:26.379726 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-04-13 01:05:26.379743 | orchestrator | Sunday 13 April 2025 01:04:47 +0000 (0:00:02.307) 0:01:12.379 ********** 2025-04-13 01:05:26.379759 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:05:26.379776 | orchestrator | 2025-04-13 01:05:26.379792 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-13 01:05:26.379809 | orchestrator | Sunday 13 April 2025 01:05:02 +0000 (0:00:14.829) 0:01:27.209 ********** 2025-04-13 01:05:26.379826 | orchestrator | 2025-04-13 01:05:26.379843 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-13 01:05:26.379860 | orchestrator | Sunday 13 April 2025 01:05:02 +0000 (0:00:00.076) 0:01:27.285 ********** 2025-04-13 01:05:26.379877 | orchestrator | 2025-04-13 01:05:26.379894 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-13 01:05:26.379911 | orchestrator | Sunday 13 April 2025 01:05:02 +0000 (0:00:00.208) 0:01:27.493 ********** 2025-04-13 01:05:26.379927 | orchestrator | 2025-04-13 01:05:26.379944 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-04-13 01:05:26.379961 | orchestrator | Sunday 13 April 2025 01:05:02 +0000 (0:00:00.062) 0:01:27.556 ********** 2025-04-13 01:05:26.379978 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:05:26.379995 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:05:26.380011 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:05:26.380039 | orchestrator | 2025-04-13 01:05:26.380056 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-04-13 01:05:26.380073 | 
orchestrator | Sunday 13 April 2025 01:05:16 +0000 (0:00:13.086) 0:01:40.642 ********** 2025-04-13 01:05:26.380091 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:05:26.380107 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:05:26.380144 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:05:26.380161 | orchestrator | 2025-04-13 01:05:26.380178 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:05:26.380200 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-13 01:05:29.412937 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-13 01:05:29.413063 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-13 01:05:29.413081 | orchestrator | 2025-04-13 01:05:29.413097 | orchestrator | 2025-04-13 01:05:29.413154 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:05:29.413171 | orchestrator | Sunday 13 April 2025 01:05:24 +0000 (0:00:08.419) 0:01:49.061 ********** 2025-04-13 01:05:29.413289 | orchestrator | =============================================================================== 2025-04-13 01:05:29.413310 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.83s 2025-04-13 01:05:29.413325 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.09s 2025-04-13 01:05:29.413339 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 8.42s 2025-04-13 01:05:29.413353 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.50s 2025-04-13 01:05:29.413388 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.00s 2025-04-13 01:05:29.413403 | orchestrator | magnum : 
Creating Magnum trustee user ----------------------------------- 4.48s 2025-04-13 01:05:29.413417 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.33s 2025-04-13 01:05:29.413431 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 4.17s 2025-04-13 01:05:29.413444 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.09s 2025-04-13 01:05:29.413458 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.06s 2025-04-13 01:05:29.413472 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.60s 2025-04-13 01:05:29.413486 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.50s 2025-04-13 01:05:29.413499 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.40s 2025-04-13 01:05:29.413513 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.40s 2025-04-13 01:05:29.413527 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.31s 2025-04-13 01:05:29.413540 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.56s 2025-04-13 01:05:29.413554 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.54s 2025-04-13 01:05:29.413568 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.31s 2025-04-13 01:05:29.413582 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.21s 2025-04-13 01:05:29.413595 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.11s 2025-04-13 01:05:29.413610 | orchestrator | 2025-04-13 01:05:26 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:05:29.413624 | orchestrator | 2025-04-13 
01:05:26 | INFO  | Task 606eec39-10be-4806-b9f5-43d824737ea1 is in state STARTED 2025-04-13 01:05:29.413638 | orchestrator | 2025-04-13 01:05:26 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:05:29.413679 | orchestrator | 2025-04-13 01:05:26 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:05:29.413713 | orchestrator | 2025-04-13 01:05:29 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:05:29.414193 | orchestrator | 2025-04-13 01:05:29 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:05:29.414222 | orchestrator | 2025-04-13 01:05:29 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:05:29.414245 | orchestrator | 2025-04-13 01:05:29 | INFO  | Task 606eec39-10be-4806-b9f5-43d824737ea1 is in state STARTED 2025-04-13 01:05:29.414876 | orchestrator | 2025-04-13 01:05:29 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:05:32.452181 | orchestrator | 2025-04-13 01:05:29 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:05:32.452322 | orchestrator | 2025-04-13 01:05:32 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:05:32.452867 | orchestrator | 2025-04-13 01:05:32 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:05:32.452909 | orchestrator | 2025-04-13 01:05:32 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:05:32.453488 | orchestrator | 2025-04-13 01:05:32 | INFO  | Task 606eec39-10be-4806-b9f5-43d824737ea1 is in state SUCCESS 2025-04-13 01:05:32.455310 | orchestrator | 2025-04-13 01:05:32 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:05:35.497581 | orchestrator | 2025-04-13 01:05:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:05:35.497731 | orchestrator | 2025-04-13 01:05:35 | INFO  | Task 
ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:05:35.503226 | orchestrator | 2025-04-13 01:05:35 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:05:35.504138 | orchestrator | 2025-04-13 01:05:35 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state STARTED 2025-04-13 01:05:35.506547 | orchestrator | 2025-04-13 01:05:35 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:05:35.508301 | orchestrator | 2025-04-13 01:05:35 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:05:38.547797 | orchestrator | 2025-04-13 01:05:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:06:21.272567 | orchestrator | 2025-04-13 01:06:21 | INFO  | Task
ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:06:21.273075 | orchestrator | 2025-04-13 01:06:21 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:06:21.274915 | orchestrator | 2025-04-13 01:06:21.274946 | orchestrator | 2025-04-13 01:06:21.274961 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 01:06:21.274976 | orchestrator | 2025-04-13 01:06:21.274990 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 01:06:21.275004 | orchestrator | Sunday 13 April 2025 01:04:59 +0000 (0:00:00.320) 0:00:00.320 ********** 2025-04-13 01:06:21.275018 | orchestrator | ok: [testbed-manager] 2025-04-13 01:06:21.275034 | orchestrator | ok: [testbed-node-3] 2025-04-13 01:06:21.275048 | orchestrator | ok: [testbed-node-4] 2025-04-13 01:06:21.275061 | orchestrator | ok: [testbed-node-5] 2025-04-13 01:06:21.275075 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:06:21.275088 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:06:21.275102 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:06:21.275145 | orchestrator | 2025-04-13 01:06:21.275160 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 01:06:21.275174 | orchestrator | Sunday 13 April 2025 01:05:00 +0000 (0:00:01.013) 0:00:01.334 ********** 2025-04-13 01:06:21.275297 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-04-13 01:06:21.275320 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-04-13 01:06:21.275334 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-04-13 01:06:21.275348 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-04-13 01:06:21.275362 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-04-13 01:06:21.275386 | orchestrator | ok: [testbed-node-1] => 
(item=enable_ceph_rgw_True) 2025-04-13 01:06:21.275400 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-04-13 01:06:21.275414 | orchestrator | 2025-04-13 01:06:21.275428 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-04-13 01:06:21.275442 | orchestrator | 2025-04-13 01:06:21.275456 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-04-13 01:06:21.275470 | orchestrator | Sunday 13 April 2025 01:05:01 +0000 (0:00:00.987) 0:00:02.322 ********** 2025-04-13 01:06:21.275485 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:06:21.275500 | orchestrator | 2025-04-13 01:06:21.275514 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-04-13 01:06:21.275528 | orchestrator | Sunday 13 April 2025 01:05:03 +0000 (0:00:01.714) 0:00:04.036 ********** 2025-04-13 01:06:21.275542 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-04-13 01:06:21.275556 | orchestrator | 2025-04-13 01:06:21.275570 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-04-13 01:06:21.275584 | orchestrator | Sunday 13 April 2025 01:05:07 +0000 (0:00:03.544) 0:00:07.581 ********** 2025-04-13 01:06:21.275599 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-04-13 01:06:21.275614 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-04-13 01:06:21.275628 | orchestrator | 2025-04-13 01:06:21.275642 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-04-13 01:06:21.275656 | 
orchestrator | Sunday 13 April 2025 01:05:12 +0000 (0:00:05.564) 0:00:13.145 ********** 2025-04-13 01:06:21.275669 | orchestrator | ok: [testbed-manager] => (item=service) 2025-04-13 01:06:21.275683 | orchestrator | 2025-04-13 01:06:21.275697 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-04-13 01:06:21.275711 | orchestrator | Sunday 13 April 2025 01:05:15 +0000 (0:00:03.312) 0:00:16.458 ********** 2025-04-13 01:06:21.275725 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-13 01:06:21.275744 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-04-13 01:06:21.275766 | orchestrator | 2025-04-13 01:06:21.275798 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-04-13 01:06:21.275821 | orchestrator | Sunday 13 April 2025 01:05:19 +0000 (0:00:04.038) 0:00:20.496 ********** 2025-04-13 01:06:21.275843 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-04-13 01:06:21.275864 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-04-13 01:06:21.275885 | orchestrator | 2025-04-13 01:06:21.275908 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-04-13 01:06:21.275930 | orchestrator | Sunday 13 April 2025 01:05:25 +0000 (0:00:05.507) 0:00:26.003 ********** 2025-04-13 01:06:21.275955 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-04-13 01:06:21.275979 | orchestrator | 2025-04-13 01:06:21.276003 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:06:21.276027 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:06:21.276069 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:06:21.276101 | orchestrator | 
testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:06:21.276144 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:06:21.276160 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:06:21.276189 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:06:21.277166 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:06:21.277199 | orchestrator | 2025-04-13 01:06:21.277215 | orchestrator | 2025-04-13 01:06:21.277230 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:06:21.277388 | orchestrator | Sunday 13 April 2025 01:05:31 +0000 (0:00:06.095) 0:00:32.099 ********** 2025-04-13 01:06:21.277403 | orchestrator | =============================================================================== 2025-04-13 01:06:21.277417 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.10s 2025-04-13 01:06:21.277431 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.56s 2025-04-13 01:06:21.277445 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.51s 2025-04-13 01:06:21.277459 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.04s 2025-04-13 01:06:21.277472 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.55s 2025-04-13 01:06:21.277486 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.31s 2025-04-13 01:06:21.277500 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.71s 2025-04-13 01:06:21.277514 | orchestrator | Group hosts based on Kolla 
action --------------------------------------- 1.01s 2025-04-13 01:06:21.277528 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.99s 2025-04-13 01:06:21.277541 | orchestrator | 2025-04-13 01:06:21.277555 | orchestrator | 2025-04-13 01:06:21 | INFO  | Task a9ad4dc4-f097-4638-a344-ea85aaf9638e is in state SUCCESS 2025-04-13 01:06:21.277570 | orchestrator | 2025-04-13 01:06:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:06:21.277590 | orchestrator | 2025-04-13 01:06:21 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:06:24.301978 | orchestrator | 2025-04-13 01:06:21 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:06:24.302238 | orchestrator | 2025-04-13 01:06:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:06:24.302275 | orchestrator | 2025-04-13 01:06:24 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:06:24.302448 | orchestrator | 2025-04-13 01:06:24 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:06:24.302462 | orchestrator | 2025-04-13 01:06:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:06:24.302475 | orchestrator | 2025-04-13 01:06:24 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:06:24.302495 | orchestrator | 2025-04-13 01:06:24 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:06:27.324701 | orchestrator | 2025-04-13 01:06:24 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:06:27.324898 | orchestrator | 2025-04-13 01:06:27 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:06:27.325540 | orchestrator | 2025-04-13 01:06:27 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:06:27.325569 | orchestrator | 2025-04-13 
01:06:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:06:27.326456 | orchestrator | 2025-04-13 01:06:27 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:06:27.327637 | orchestrator | 2025-04-13 01:06:27 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:06:30.358337 | orchestrator | 2025-04-13 01:06:27 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:06:57.653623 | orchestrator | 2025-04-13 01:06:57 | INFO  | Task
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:06:57.654183 | orchestrator | 2025-04-13 01:06:57 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:06:57.654570 | orchestrator | 2025-04-13 01:06:57 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:00.687518 | orchestrator | 2025-04-13 01:06:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:00.687679 | orchestrator | 2025-04-13 01:07:00 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:00.687819 | orchestrator | 2025-04-13 01:07:00 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:00.688767 | orchestrator | 2025-04-13 01:07:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:00.689166 | orchestrator | 2025-04-13 01:07:00 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:00.689196 | orchestrator | 2025-04-13 01:07:00 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:00.689878 | orchestrator | 2025-04-13 01:07:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:03.720619 | orchestrator | 2025-04-13 01:07:03 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:03.721284 | orchestrator | 2025-04-13 01:07:03 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:03.724383 | orchestrator | 2025-04-13 01:07:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:03.724664 | orchestrator | 2025-04-13 01:07:03 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:03.724696 | orchestrator | 2025-04-13 01:07:03 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:03.724889 | orchestrator | 2025-04-13 01:07:03 | INFO  | Wait 1 
second(s) until the next check 2025-04-13 01:07:06.760671 | orchestrator | 2025-04-13 01:07:06 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:06.761492 | orchestrator | 2025-04-13 01:07:06 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:06.761622 | orchestrator | 2025-04-13 01:07:06 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:06.762232 | orchestrator | 2025-04-13 01:07:06 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:06.762274 | orchestrator | 2025-04-13 01:07:06 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:09.788934 | orchestrator | 2025-04-13 01:07:06 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:09.789066 | orchestrator | 2025-04-13 01:07:09 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:09.790177 | orchestrator | 2025-04-13 01:07:09 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:09.790239 | orchestrator | 2025-04-13 01:07:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:09.790379 | orchestrator | 2025-04-13 01:07:09 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:09.791251 | orchestrator | 2025-04-13 01:07:09 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:12.825540 | orchestrator | 2025-04-13 01:07:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:12.825686 | orchestrator | 2025-04-13 01:07:12 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:12.825972 | orchestrator | 2025-04-13 01:07:12 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:12.826001 | orchestrator | 2025-04-13 01:07:12 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:12.826067 | orchestrator | 2025-04-13 01:07:12 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:12.826376 | orchestrator | 2025-04-13 01:07:12 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:12.826481 | orchestrator | 2025-04-13 01:07:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:15.852785 | orchestrator | 2025-04-13 01:07:15 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:15.853265 | orchestrator | 2025-04-13 01:07:15 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:15.856201 | orchestrator | 2025-04-13 01:07:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:15.859679 | orchestrator | 2025-04-13 01:07:15 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:15.861671 | orchestrator | 2025-04-13 01:07:15 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:15.862523 | orchestrator | 2025-04-13 01:07:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:18.906384 | orchestrator | 2025-04-13 01:07:18 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:18.906656 | orchestrator | 2025-04-13 01:07:18 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:18.907644 | orchestrator | 2025-04-13 01:07:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:18.910322 | orchestrator | 2025-04-13 01:07:18 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:21.933279 | orchestrator | 2025-04-13 01:07:18 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:21.933503 | orchestrator | 2025-04-13 01:07:18 | INFO  | Wait 1 
second(s) until the next check 2025-04-13 01:07:21.933538 | orchestrator | 2025-04-13 01:07:21 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:21.934144 | orchestrator | 2025-04-13 01:07:21 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:21.934168 | orchestrator | 2025-04-13 01:07:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:21.934781 | orchestrator | 2025-04-13 01:07:21 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:21.936539 | orchestrator | 2025-04-13 01:07:21 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:24.968777 | orchestrator | 2025-04-13 01:07:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:24.968988 | orchestrator | 2025-04-13 01:07:24 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:24.970863 | orchestrator | 2025-04-13 01:07:24 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:24.970895 | orchestrator | 2025-04-13 01:07:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:24.970915 | orchestrator | 2025-04-13 01:07:24 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:24.972591 | orchestrator | 2025-04-13 01:07:24 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:28.025037 | orchestrator | 2025-04-13 01:07:24 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:28.025250 | orchestrator | 2025-04-13 01:07:28 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:28.026109 | orchestrator | 2025-04-13 01:07:28 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:28.027027 | orchestrator | 2025-04-13 01:07:28 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:28.028892 | orchestrator | 2025-04-13 01:07:28 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:28.033601 | orchestrator | 2025-04-13 01:07:28 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:31.067475 | orchestrator | 2025-04-13 01:07:28 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:31.067626 | orchestrator | 2025-04-13 01:07:31 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:31.072548 | orchestrator | 2025-04-13 01:07:31 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:31.073180 | orchestrator | 2025-04-13 01:07:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:31.074099 | orchestrator | 2025-04-13 01:07:31 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:31.077556 | orchestrator | 2025-04-13 01:07:31 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:34.105199 | orchestrator | 2025-04-13 01:07:31 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:34.105336 | orchestrator | 2025-04-13 01:07:34 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:34.106750 | orchestrator | 2025-04-13 01:07:34 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:34.107205 | orchestrator | 2025-04-13 01:07:34 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:34.107636 | orchestrator | 2025-04-13 01:07:34 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:34.108434 | orchestrator | 2025-04-13 01:07:34 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:37.144019 | orchestrator | 2025-04-13 01:07:34 | INFO  | Wait 1 
second(s) until the next check 2025-04-13 01:07:37.144227 | orchestrator | 2025-04-13 01:07:37 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:37.144521 | orchestrator | 2025-04-13 01:07:37 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:37.145029 | orchestrator | 2025-04-13 01:07:37 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:37.145519 | orchestrator | 2025-04-13 01:07:37 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:37.146556 | orchestrator | 2025-04-13 01:07:37 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:40.198083 | orchestrator | 2025-04-13 01:07:37 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:40.198279 | orchestrator | 2025-04-13 01:07:40 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:40.199313 | orchestrator | 2025-04-13 01:07:40 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:40.201226 | orchestrator | 2025-04-13 01:07:40 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:40.201729 | orchestrator | 2025-04-13 01:07:40 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:40.202751 | orchestrator | 2025-04-13 01:07:40 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:43.239482 | orchestrator | 2025-04-13 01:07:40 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:43.239575 | orchestrator | 2025-04-13 01:07:43 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:43.241612 | orchestrator | 2025-04-13 01:07:43 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:43.242227 | orchestrator | 2025-04-13 01:07:43 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:43.243089 | orchestrator | 2025-04-13 01:07:43 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:43.244888 | orchestrator | 2025-04-13 01:07:43 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:43.245082 | orchestrator | 2025-04-13 01:07:43 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:46.297342 | orchestrator | 2025-04-13 01:07:46 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:46.298117 | orchestrator | 2025-04-13 01:07:46 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:46.298195 | orchestrator | 2025-04-13 01:07:46 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:46.298218 | orchestrator | 2025-04-13 01:07:46 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:46.300016 | orchestrator | 2025-04-13 01:07:46 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:49.355102 | orchestrator | 2025-04-13 01:07:46 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:49.355317 | orchestrator | 2025-04-13 01:07:49 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:49.357555 | orchestrator | 2025-04-13 01:07:49 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:49.358367 | orchestrator | 2025-04-13 01:07:49 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:49.359981 | orchestrator | 2025-04-13 01:07:49 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:49.363172 | orchestrator | 2025-04-13 01:07:49 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:52.417593 | orchestrator | 2025-04-13 01:07:49 | INFO  | Wait 1 
second(s) until the next check 2025-04-13 01:07:52.417738 | orchestrator | 2025-04-13 01:07:52 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:52.421036 | orchestrator | 2025-04-13 01:07:52 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:52.422698 | orchestrator | 2025-04-13 01:07:52 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:52.422827 | orchestrator | 2025-04-13 01:07:52 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:52.422867 | orchestrator | 2025-04-13 01:07:52 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:55.470507 | orchestrator | 2025-04-13 01:07:52 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:55.470689 | orchestrator | 2025-04-13 01:07:55 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:55.473474 | orchestrator | 2025-04-13 01:07:55 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:55.475361 | orchestrator | 2025-04-13 01:07:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:55.477575 | orchestrator | 2025-04-13 01:07:55 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:55.478618 | orchestrator | 2025-04-13 01:07:55 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:07:55.479310 | orchestrator | 2025-04-13 01:07:55 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:07:58.525884 | orchestrator | 2025-04-13 01:07:58 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:07:58.527325 | orchestrator | 2025-04-13 01:07:58 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:07:58.530266 | orchestrator | 2025-04-13 01:07:58 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:07:58.531108 | orchestrator | 2025-04-13 01:07:58 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:07:58.533477 | orchestrator | 2025-04-13 01:07:58 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:08:01.577890 | orchestrator | 2025-04-13 01:07:58 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:08:01.578089 | orchestrator | 2025-04-13 01:08:01 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:08:01.580875 | orchestrator | 2025-04-13 01:08:01 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:08:01.584409 | orchestrator | 2025-04-13 01:08:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:08:01.586180 | orchestrator | 2025-04-13 01:08:01 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:08:01.587395 | orchestrator | 2025-04-13 01:08:01 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:08:04.647714 | orchestrator | 2025-04-13 01:08:01 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:08:04.647856 | orchestrator | 2025-04-13 01:08:04 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:08:04.650604 | orchestrator | 2025-04-13 01:08:04 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state STARTED 2025-04-13 01:08:04.651950 | orchestrator | 2025-04-13 01:08:04 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:08:04.651982 | orchestrator | 2025-04-13 01:08:04 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:08:04.653619 | orchestrator | 2025-04-13 01:08:04 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:08:07.698869 | orchestrator | 2025-04-13 01:08:04 | INFO  | Wait 1 
second(s) until the next check
2025-04-13 01:08:07.699022 | orchestrator | 2025-04-13 01:08:07 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED
2025-04-13 01:08:07.701554 | orchestrator | 2025-04-13 01:08:07 | INFO  | Task d669e92e-20ad-4f0e-9eee-31482984aef1 is in state SUCCESS
2025-04-13 01:08:07.703240 | orchestrator |
2025-04-13 01:08:07.703282 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-04-13 01:08:07.703298 | orchestrator |
2025-04-13 01:08:07.703313 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-04-13 01:08:07.703327 | orchestrator | Sunday 13 April 2025 01:00:16 +0000 (0:00:00.173) 0:00:00.173 **********
2025-04-13 01:08:07.703341 | orchestrator | changed: [localhost]
2025-04-13 01:08:07.703357 | orchestrator |
2025-04-13 01:08:07.703371 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-04-13 01:08:07.703385 | orchestrator | Sunday 13 April 2025 01:00:17 +0000 (0:00:00.670) 0:00:00.844 **********
2025-04-13 01:08:07.703399 | orchestrator |
2025-04-13 01:08:07.703412 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-04-13 01:08:07.703426 | orchestrator |
2025-04-13 01:08:07.703458 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-04-13 01:08:07.703473 | orchestrator |
2025-04-13 01:08:07.703487 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-04-13 01:08:07.703500 | orchestrator |
2025-04-13 01:08:07.703514 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-04-13 01:08:07.703528 | orchestrator |
2025-04-13 01:08:07.703541 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-04-13 01:08:07.703555 | orchestrator |
2025-04-13 01:08:07.703569 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-04-13 01:08:07.703582 | orchestrator |
2025-04-13 01:08:07.703596 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-04-13 01:08:07.703610 | orchestrator | changed: [localhost]
2025-04-13 01:08:07.703624 | orchestrator |
2025-04-13 01:08:07.703638 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-04-13 01:08:07.703651 | orchestrator | Sunday 13 April 2025 01:06:03 +0000 (0:05:46.017) 0:05:46.862 **********
2025-04-13 01:08:07.703665 | orchestrator | changed: [localhost]
2025-04-13 01:08:07.703679 | orchestrator |
2025-04-13 01:08:07.703700 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-13 01:08:07.703714 | orchestrator |
2025-04-13 01:08:07.703729 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-13 01:08:07.703743 | orchestrator | Sunday 13 April 2025 01:06:16 +0000 (0:00:13.195) 0:06:00.057 **********
2025-04-13 01:08:07.703756 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:08:07.703770 | orchestrator | ok: [testbed-node-1]
2025-04-13 01:08:07.703784 | orchestrator | ok: [testbed-node-2]
2025-04-13 01:08:07.703798 | orchestrator |
2025-04-13 01:08:07.703812 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-13 01:08:07.703826 | orchestrator | Sunday 13 April 2025 01:06:17 +0000 (0:00:01.102) 0:06:01.160 **********
2025-04-13 01:08:07.703840 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-04-13 01:08:07.703854 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-04-13 01:08:07.703867 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-04-13 01:08:07.703881 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-04-13 01:08:07.703895 | orchestrator |
2025-04-13 01:08:07.703909 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-04-13 01:08:07.703923 | orchestrator | skipping: no hosts matched
2025-04-13 01:08:07.703943 | orchestrator |
2025-04-13 01:08:07.703957 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 01:08:07.703971 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 01:08:07.703988 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 01:08:07.704018 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 01:08:07.704032 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 01:08:07.704046 | orchestrator |
2025-04-13 01:08:07.704060 | orchestrator |
2025-04-13 01:08:07.704074 | orchestrator | TASKS RECAP ********************************************************************
2025-04-13 01:08:07.704088 | orchestrator | Sunday 13 April 2025 01:06:19 +0000 (0:00:01.098) 0:06:02.258 **********
2025-04-13 01:08:07.704102 | orchestrator | ===============================================================================
2025-04-13 01:08:07.704115 | orchestrator | Download ironic-agent initramfs --------------------------------------- 346.02s
2025-04-13 01:08:07.704150 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.20s
2025-04-13 01:08:07.704164 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.10s
2025-04-13 01:08:07.704178 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.10s
2025-04-13 01:08:07.704192 | orchestrator | Ensure the destination directory exists --------------------------------- 0.67s
2025-04-13 01:08:07.704206 | orchestrator |
2025-04-13 01:08:07.704219 | orchestrator |
2025-04-13 01:08:07.704296 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-13 01:08:07.704312 | orchestrator |
2025-04-13 01:08:07.704414 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-13 01:08:07.704430 | orchestrator | Sunday 13 April 2025 01:03:49 +0000 (0:00:00.319) 0:00:00.319 **********
2025-04-13 01:08:07.704444 | orchestrator | ok: [testbed-manager]
2025-04-13 01:08:07.704459 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:08:07.704473 | orchestrator | ok: [testbed-node-1]
2025-04-13 01:08:07.704486 | orchestrator | ok: [testbed-node-2]
2025-04-13 01:08:07.704500 | orchestrator | ok: [testbed-node-3]
2025-04-13 01:08:07.704514 | orchestrator | ok: [testbed-node-4]
2025-04-13 01:08:07.704528 | orchestrator | ok: [testbed-node-5]
2025-04-13 01:08:07.704541 | orchestrator |
2025-04-13 01:08:07.704555 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-13 01:08:07.704569 | orchestrator | Sunday 13 April 2025 01:03:50 +0000 (0:00:01.150) 0:00:01.470 **********
2025-04-13 01:08:07.704595 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-04-13 01:08:07.704610 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-04-13 01:08:07.704625 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-04-13 01:08:07.704639 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-04-13 01:08:07.704653 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-04-13 01:08:07.704667 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-04-13 01:08:07.704681 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-04-13 01:08:07.704695 | orchestrator |
2025-04-13 01:08:07.704710 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-04-13 01:08:07.704723 | orchestrator |
2025-04-13 01:08:07.704737 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-04-13 01:08:07.704751 | orchestrator | Sunday 13 April 2025 01:03:51 +0000 (0:00:01.124) 0:00:02.594 **********
2025-04-13 01:08:07.704765 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 01:08:07.704780 | orchestrator |
2025-04-13 01:08:07.704794 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-04-13 01:08:07.704807 | orchestrator | Sunday 13 April 2025 01:03:53 +0000 (0:00:01.579) 0:00:04.174 **********
2025-04-13 01:08:07.704824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.704855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.704906 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.704957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-13 01:08:07.704983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-13 01:08:07.705058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.705084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.705099 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-13 01:08:07.705160 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.705180 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.705270 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.705296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.705312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-13 01:08:07.705336 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.705351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.705379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.705395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-13 01:08:07.705410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.705432 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.705448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-04-13 01:08:07.705471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.705485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.705513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.705528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.705543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.705566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.705622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.705648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.705672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.705695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.705719 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-13 01:08:07.705767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-04-13 01:08:07.705799 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.705822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.705843 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.706227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.706274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.706317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.707288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.707401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.707425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.707444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.707487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.707577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.707596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.707607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.707617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.707626 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.707645 | orchestrator | 
skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.707672 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.707684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.707694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 
'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.707704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.707713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.707723 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.707754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.707765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.707776 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.707787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.707798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.707809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.707820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.707842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.707860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.707872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.707882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.707894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.707905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.707916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.707942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.707968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.707986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.708004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.708020 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.708031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.708063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.708089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-04-13 01:08:07.708106 | orchestrator |
2025-04-13 01:08:07.708147 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-04-13 01:08:07.708166 | orchestrator | Sunday 13 April 2025 01:03:57 +0000 (0:00:03.974) 0:00:08.148 **********
2025-04-13 01:08:07.708183 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-13 01:08:07.708199 | orchestrator |
2025-04-13 01:08:07.708216 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-04-13 01:08:07.708226 | orchestrator | Sunday 13 April 2025 01:03:58 +0000 (0:00:01.633) 0:00:09.782 **********
2025-04-13 01:08:07.708236 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.708247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.708256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.708266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.708283 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.708309 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.708320 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.708330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.708339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.708349 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.708358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.708378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 
01:08:07.708397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.708420 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.708435 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.708452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.708467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.708482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.708506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2025-04-13 01:08:07.708536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.708561 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-13 01:08:07.708577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.708595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.708612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.708628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.708664 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.708681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.708697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.708722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.708739 | orchestrator |
2025-04-13 01:08:07.708755 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-04-13 01:08:07.708772 | orchestrator | Sunday 13 April 2025 01:04:04 +0000 (0:00:05.510) 0:00:15.292 **********
2025-04-13 01:08:07.708787 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 
2025-04-13 01:08:07.708803 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2025-04-13 01:08:07.708827 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.708844 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.708873 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.708903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.708921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.708938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.708954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.708977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.708994 | orchestrator | skipping: [testbed-manager] 2025-04-13 01:08:07.709011 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:07.709027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.709054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.709080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.709097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.709113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.709150 | orchestrator | skipping: [testbed-node-1] 
2025-04-13 01:08:07.709167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.709191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.709220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.709237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.709253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.709271 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:07.709295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.709312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.709329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.709354 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:08:07.709370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.709386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.709415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.709433 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:08:07.709449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.709473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.709491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.709510 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:08:07.709526 | orchestrator | 2025-04-13 01:08:07.709542 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-04-13 01:08:07.709572 | orchestrator | Sunday 13 April 2025 01:04:06 +0000 (0:00:02.086) 0:00:17.379 ********** 2025-04-13 01:08:07.709590 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-13 01:08:07.709608 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.709625 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.709657 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.709685 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.709702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.709725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.709741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.709767 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.709783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.709799 | orchestrator | skipping: [testbed-manager] 2025-04-13 01:08:07.709815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.709831 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:07.709847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.709872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.710664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.710717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.710735 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:07.710752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.710793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.710811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.710828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.710938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.710961 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:07.711002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.711020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.711036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.711067 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:08:07.711083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.711100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.711116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.711154 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:08:07.711171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-13 01:08:07.711281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.711303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.711320 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:08:07.711335 | orchestrator | 2025-04-13 01:08:07.711352 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-04-13 01:08:07.711368 | orchestrator | Sunday 13 April 2025 01:04:08 +0000 (0:00:02.672) 0:00:20.052 ********** 2025-04-13 01:08:07.711401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-13 01:08:07.711419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-13 01:08:07.711436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-13 01:08:07.711485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-13 01:08:07.711512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.711541 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-13 01:08:07.711558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.711573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.711590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-13 01:08:07.711638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-13 01:08:07.711664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.711693 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.711710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.711726 | orchestrator | 
skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.711742 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.711758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.711774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.711834 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.711863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.711880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.711897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.711913 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-13 01:08:07.711929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.711946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.711985 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.712034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.712052 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.712069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 
'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.712086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.712102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.712147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.712213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.712232 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.712249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.712266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.712283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.712307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.712336 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.712384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.712402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.712419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.712435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.712472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.712489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.712542 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 
'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-13 01:08:07.712562 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-13 01:08:07.712579 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.712595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-13 01:08:07.712630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.712647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-13 01:08:07.712689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-13 01:08:07.712701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-13 01:08:07.712711 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-13 01:08:07.712735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-13 01:08:07.712746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-13 01:08:07.712778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.712789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-13 01:08:07.712799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-13 01:08:07.712817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-13 01:08:07.712836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-13 01:08:07.712865 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-13 01:08:07.712877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.712887 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.712896 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-13 01:08:07.712913 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.712929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.712938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.712948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-13 01:08:07.712977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.712988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.712998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.713015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-13 01:08:07.713031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.713041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.713050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.713060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-13 01:08:07.713091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.713103 | orchestrator |
2025-04-13 01:08:07.713113 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-04-13 01:08:07.713151 | orchestrator | Sunday 13 April 2025 01:04:16 +0000 (0:00:07.484) 0:00:27.536 **********
2025-04-13 01:08:07.713169 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-13 01:08:07.713184 | orchestrator |
2025-04-13 01:08:07.713200 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-04-13 01:08:07.713216 | orchestrator | Sunday 13 April 2025 01:04:17 +0000 (0:00:00.571) 0:00:28.108 **********
2025-04-13 01:08:07.713231 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072097, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713260 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072097, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713289 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072097, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713305 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072097, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713321 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072097, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713370 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072097, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713387 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072111, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4108958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713414 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072111, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4108958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713439 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072111, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4108958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713454 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072111, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4108958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713469 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072111, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4108958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713485 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072111, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4108958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713500 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1072097, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713560 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072102, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713578 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072102, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713604 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072102, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713618 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072102, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713632 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072102, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713647 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072102, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713662 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072108, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713709 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072108, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713738 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072108, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713762 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072108, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713772 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072108, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713781 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072108, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713790 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072129, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713799 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072129, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713837 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072129, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713853 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072129, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713863 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072129, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713872 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072129, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.713881 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072115, 'dev': 169, 'nlink': 1, 'atime':
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.713890 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1072111, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4108958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 01:08:07.713899 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072115, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.713937 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072115, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.713953 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072115, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.713962 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072115, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.713971 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072107, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.713981 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072115, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.713990 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072107, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.713999 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072107, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714060 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072107, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714079 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072107, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714088 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072107, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714097 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 
1072114, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714106 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072114, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714115 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072114, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714226 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1072102, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4088957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 01:08:07.714312 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072114, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714332 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072128, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714341 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072114, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 
01:08:07.714350 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072114, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714359 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072128, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714368 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072128, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714385 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072104, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714419 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072128, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714430 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072128, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714439 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072128, 'dev': 169, 'nlink': 
1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714448 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072104, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714456 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072119, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4138958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714465 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:07.714475 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072104, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714491 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072104, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714548 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072104, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714560 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072119, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4138958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714569 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:07.714578 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072104, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714587 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072119, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4138958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714596 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:08:07.714605 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072119, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4138958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714613 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:08:07.714630 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072119, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4138958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714645 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:08:07.714655 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072108, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 01:08:07.714684 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072119, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4138958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-04-13 01:08:07.714695 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:07.714704 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072129, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 01:08:07.714713 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072115, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 01:08:07.714722 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072107, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-13 01:08:07.714731 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072114, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.714747 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072128, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.415896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.714763 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072104, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4098957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.714791 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1072119, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.4138958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-13 01:08:07.714802 | orchestrator |
2025-04-13 01:08:07.714811 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-04-13 01:08:07.714820 | orchestrator | Sunday 13 April 2025 01:04:55 +0000 (0:00:38.266) 0:01:06.374 **********
2025-04-13 01:08:07.714829 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-13 01:08:07.714837 | orchestrator |
2025-04-13 01:08:07.714846 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-04-13 01:08:07.714855 | orchestrator | Sunday 13 April 2025 01:04:55 +0000 (0:00:00.462) 0:01:06.837 **********
2025-04-13 01:08:07.714863 | orchestrator | [WARNING]: Skipped
2025-04-13 01:08:07.714872 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.714881 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-04-13 01:08:07.714890 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.714898 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-04-13 01:08:07.714907 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-13 01:08:07.714916 | orchestrator | [WARNING]: Skipped
2025-04-13 01:08:07.714925 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.714933 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-04-13 01:08:07.714942 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.714950 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-04-13 01:08:07.714959 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-13 01:08:07.714968 | orchestrator | [WARNING]: Skipped
2025-04-13 01:08:07.714977 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.714985 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-04-13 01:08:07.714994 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.715002 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-04-13 01:08:07.715011 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-04-13 01:08:07.715020 | orchestrator | [WARNING]: Skipped
2025-04-13 01:08:07.715028 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.715037 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-04-13 01:08:07.715051 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.715059 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-04-13 01:08:07.715069 | orchestrator | [WARNING]: Skipped
2025-04-13 01:08:07.715077 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.715086 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-04-13 01:08:07.715094 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.715103 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-04-13 01:08:07.715111 | orchestrator | [WARNING]: Skipped
2025-04-13 01:08:07.715120 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.715153 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-04-13 01:08:07.715168 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.715181 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-04-13 01:08:07.715196 | orchestrator | [WARNING]: Skipped
2025-04-13 01:08:07.715205 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.715213 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-04-13 01:08:07.715222 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-13 01:08:07.715230 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-04-13 01:08:07.715239 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-04-13 01:08:07.715248 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-04-13 01:08:07.715256 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-04-13 01:08:07.715265 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-04-13 01:08:07.715273 | orchestrator |
2025-04-13 01:08:07.715282 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-04-13 01:08:07.715291 | orchestrator | Sunday 13 April 2025 01:04:57 +0000 (0:00:01.527) 0:01:08.364 **********
2025-04-13 01:08:07.715299 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-04-13 01:08:07.715309 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:07.715318 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-04-13 01:08:07.715326 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:07.715335 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-04-13 01:08:07.715349 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:07.715392 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-04-13 01:08:07.715405 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:07.715419 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-04-13 01:08:07.715433 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-04-13 01:08:07.715446 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:07.715460 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:07.715474 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-04-13 01:08:07.715488 | orchestrator |
2025-04-13 01:08:07.715502 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-04-13 01:08:07.715521 | orchestrator | Sunday 13 April 2025 01:05:14 +0000 (0:00:16.809) 0:01:25.174 **********
2025-04-13 01:08:07.715536 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-04-13 01:08:07.715560 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:07.715575 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-04-13 01:08:07.715597 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:07.715611 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-04-13 01:08:07.715625 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:07.715635 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-04-13 01:08:07.715644 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:07.715653 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-04-13 01:08:07.715661 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:07.715670 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-04-13 01:08:07.715679 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:07.715687 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-04-13 01:08:07.715696 | orchestrator |
2025-04-13 01:08:07.715705 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-04-13 01:08:07.715713 | orchestrator | Sunday 13 April 2025 01:05:20 +0000 (0:00:06.666) 0:01:31.840 **********
2025-04-13 01:08:07.715722 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-04-13 01:08:07.715731 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:07.715740 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-04-13 01:08:07.715749 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:07.715757 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-04-13 01:08:07.715766 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:07.715775 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-04-13 01:08:07.715783 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:07.715793 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-04-13 01:08:07.715801 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:07.715810 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-04-13 01:08:07.715818 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:07.715827 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-04-13 01:08:07.715836 | orchestrator |
2025-04-13 01:08:07.715849 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-04-13 01:08:07.715857 | orchestrator | Sunday 13 April 2025 01:05:23 +0000 (0:00:02.993) 0:01:34.834 **********
2025-04-13 01:08:07.715866 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-13 01:08:07.715874 | orchestrator |
2025-04-13 01:08:07.715883 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-04-13 01:08:07.715891 | orchestrator | Sunday 13 April 2025 01:05:24 +0000 (0:00:00.429) 0:01:35.264 **********
2025-04-13 01:08:07.715900 | orchestrator | skipping: [testbed-manager]
2025-04-13 01:08:07.715908 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:07.715917 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:07.715926 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:07.715934 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:07.715943 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:07.715951 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:07.715959 | orchestrator |
2025-04-13 01:08:07.715968 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-04-13 01:08:07.715977 | orchestrator | Sunday 13 April 2025 01:05:24 +0000 (0:00:00.657) 0:01:35.922 **********
2025-04-13 01:08:07.715990 | orchestrator | skipping: [testbed-manager]
2025-04-13 01:08:07.715998 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:07.716007 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:07.716015 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:07.716024 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:08:07.716032 | orchestrator | changed: [testbed-node-1]
2025-04-13 01:08:07.716041 | orchestrator | changed: [testbed-node-2]
2025-04-13 01:08:07.716050 | orchestrator |
2025-04-13 01:08:07.716063 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-04-13 01:08:07.716073 | orchestrator | Sunday 13 April 2025 01:05:29 +0000 (0:00:04.202) 0:01:40.124 **********
2025-04-13 01:08:07.716081 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-04-13 01:08:07.716090 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:07.716106 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-04-13 01:08:07.716115 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:07.716152 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-04-13 01:08:07.716162 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:07.716175 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-04-13 01:08:07.716184 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:07.716193 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-04-13 01:08:07.716202 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:07.716210 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-04-13 01:08:07.716219 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:07.716228 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-04-13 01:08:07.716236 | orchestrator | skipping: [testbed-manager]
2025-04-13 01:08:07.716245 | orchestrator |
2025-04-13 01:08:07.716253 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-04-13 01:08:07.716262 | orchestrator | Sunday 13 April 2025 01:05:32 +0000 (0:00:03.623) 0:01:43.748 **********
2025-04-13 01:08:07.716271 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-13 01:08:07.716280 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:07.716292 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-13 01:08:07.716301 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:07.716309 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-13 01:08:07.716318 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:07.716327 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-13 01:08:07.716335 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:07.716344 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-13 01:08:07.716353 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:07.716361 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-13 01:08:07.716370 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:07.716378 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-13 01:08:07.716387 | orchestrator |
2025-04-13 01:08:07.716396 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-04-13 01:08:07.716404 | orchestrator | Sunday 13 April 2025 01:05:36 +0000 (0:00:04.168) 0:01:47.916 **********
2025-04-13 01:08:07.716417 | orchestrator | [WARNING]: Skipped
2025-04-13 01:08:07.716426 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-04-13 01:08:07.716435 | orchestrator | due to this access issue:
2025-04-13 01:08:07.716443 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-04-13 01:08:07.716452 | orchestrator | not a directory
2025-04-13 01:08:07.716465 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-13 01:08:07.716474 | orchestrator |
2025-04-13 01:08:07.716483 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-04-13 01:08:07.716491 | orchestrator | Sunday 13 April 2025 01:05:38 +0000 (0:00:01.874) 0:01:49.790 **********
2025-04-13 01:08:07.716500 | orchestrator | skipping: [testbed-manager]
2025-04-13 01:08:07.716509 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:07.716517 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:07.716526 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:07.716534 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:07.716543 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:07.716551 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:07.716560 | orchestrator |
2025-04-13 01:08:07.716568 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-04-13 01:08:07.716577 | orchestrator | Sunday 13 April 2025 01:05:39 +0000 (0:00:01.012) 0:01:50.802 **********
2025-04-13 01:08:07.716586 | orchestrator | skipping: [testbed-manager]
2025-04-13 01:08:07.716594 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:07.716603 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:07.716611 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:07.716620 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:07.716628 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:07.716637 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:07.716645 | orchestrator |
2025-04-13 01:08:07.716654 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] ****************
2025-04-13 01:08:07.716663 | orchestrator | Sunday 13 April 2025 01:05:40 +0000 (0:00:00.846) 0:01:51.648 **********
2025-04-13 01:08:07.716671 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-13 01:08:07.716680 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:07.716696 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-13 01:08:07.716705 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:07.716713 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-13 01:08:07.716722 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:07.716731 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-13 01:08:07.716739 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:07.716748 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-13 01:08:07.716757 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:07.716765 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-13 01:08:07.716774 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:07.716783 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-13 01:08:07.716791 | orchestrator | skipping: [testbed-manager]
2025-04-13 01:08:07.716800 | orchestrator |
2025-04-13 01:08:07.716809 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] **************
2025-04-13 01:08:07.716817 | orchestrator | Sunday 13 April 2025 01:05:43 +0000 (0:00:03.257) 0:01:54.906 **********
2025-04-13 01:08:07.716826 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
2025-04-13 01:08:07.716835 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:07.716844 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
2025-04-13 01:08:07.716857 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:07.716866 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
2025-04-13 01:08:07.716875 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:07.716883 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
2025-04-13 01:08:07.716892 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:07.716901 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
2025-04-13 01:08:07.716909 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:07.716918 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
2025-04-13 01:08:07.716927 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:07.716935 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
2025-04-13 01:08:07.716944 | orchestrator | skipping: [testbed-manager]
2025-04-13 01:08:07.716953 | orchestrator |
2025-04-13 01:08:07.716961 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-04-13 01:08:07.716970 | orchestrator | Sunday 13 April 2025 01:05:46 +0000 (0:00:02.929) 0:01:57.835 **********
2025-04-13 01:08:07.716979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.716990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.717017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.717027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.717041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.717051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.717059 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-13 01:08:07.717072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-13 01:08:07.717090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-13 01:08:07.717104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-13 01:08:07.717114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-13 01:08:07.717140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.717150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.717160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-13 01:08:07.717169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.717190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.717200 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-13 01:08:07.717214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.717223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.717232 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-13 01:08:07.717241 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.717250 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.717268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.717281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.717295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-13 01:08:07.717304 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-13 01:08:07.717313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-13 01:08:07.717327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes':
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.717337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717367 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.717376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.717386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.717395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717420 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.717439 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.717449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.717458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.717467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.717511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.717520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.717529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.717555 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.717564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717574 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-13 01:08:07.717592 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.717601 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.717627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.717636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.717666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.717682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.717692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.717701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.717715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-13 01:08:07.717729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-13 01:08:07.717746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-13 01:08:07.717755 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.717764 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.717773 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.717801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.717827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.717845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.717868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-13 01:08:07.717890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-13 01:08:07.717916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-13 01:08:07.717924 | orchestrator | 2025-04-13 01:08:07.717933 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-04-13 01:08:07.717942 | orchestrator | Sunday 13 April 2025 01:05:51 +0000 (0:00:05.133) 0:02:02.969 ********** 2025-04-13 01:08:07.717950 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-04-13 01:08:07.717959 | orchestrator | 2025-04-13 01:08:07.717968 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-13 
01:08:07.717976 | orchestrator | Sunday 13 April 2025 01:05:54 +0000 (0:00:02.977) 0:02:05.946 ********** 2025-04-13 01:08:07.717985 | orchestrator | 2025-04-13 01:08:07.717993 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-13 01:08:07.718002 | orchestrator | Sunday 13 April 2025 01:05:54 +0000 (0:00:00.064) 0:02:06.010 ********** 2025-04-13 01:08:07.718010 | orchestrator | 2025-04-13 01:08:07.718048 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-13 01:08:07.718057 | orchestrator | Sunday 13 April 2025 01:05:55 +0000 (0:00:00.248) 0:02:06.259 ********** 2025-04-13 01:08:07.718066 | orchestrator | 2025-04-13 01:08:07.718078 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-13 01:08:07.718092 | orchestrator | Sunday 13 April 2025 01:05:55 +0000 (0:00:00.058) 0:02:06.318 ********** 2025-04-13 01:08:07.718101 | orchestrator | 2025-04-13 01:08:07.718109 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-13 01:08:07.718118 | orchestrator | Sunday 13 April 2025 01:05:55 +0000 (0:00:00.065) 0:02:06.383 ********** 2025-04-13 01:08:07.718170 | orchestrator | 2025-04-13 01:08:07.718180 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-13 01:08:07.718189 | orchestrator | Sunday 13 April 2025 01:05:55 +0000 (0:00:00.062) 0:02:06.446 ********** 2025-04-13 01:08:07.718197 | orchestrator | 2025-04-13 01:08:07.718206 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-13 01:08:07.718214 | orchestrator | Sunday 13 April 2025 01:05:55 +0000 (0:00:00.270) 0:02:06.717 ********** 2025-04-13 01:08:07.718222 | orchestrator | 2025-04-13 01:08:07.718231 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 
2025-04-13 01:08:07.718239 | orchestrator | Sunday 13 April 2025 01:05:55 +0000 (0:00:00.073) 0:02:06.790 ********** 2025-04-13 01:08:07.718248 | orchestrator | changed: [testbed-manager] 2025-04-13 01:08:07.718256 | orchestrator | 2025-04-13 01:08:07.718264 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-04-13 01:08:07.718273 | orchestrator | Sunday 13 April 2025 01:06:13 +0000 (0:00:17.354) 0:02:24.145 ********** 2025-04-13 01:08:07.718281 | orchestrator | changed: [testbed-node-3] 2025-04-13 01:08:07.718290 | orchestrator | changed: [testbed-node-4] 2025-04-13 01:08:07.718298 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:07.718307 | orchestrator | changed: [testbed-manager] 2025-04-13 01:08:07.718315 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:08:07.718324 | orchestrator | changed: [testbed-node-5] 2025-04-13 01:08:07.718333 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:08:07.718345 | orchestrator | 2025-04-13 01:08:07.718354 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-04-13 01:08:07.718362 | orchestrator | Sunday 13 April 2025 01:06:34 +0000 (0:00:21.586) 0:02:45.731 ********** 2025-04-13 01:08:07.718371 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:08:07.718379 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:07.718387 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:08:07.718396 | orchestrator | 2025-04-13 01:08:07.718404 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-04-13 01:08:07.718413 | orchestrator | Sunday 13 April 2025 01:06:44 +0000 (0:00:09.944) 0:02:55.676 ********** 2025-04-13 01:08:07.718421 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:08:07.718430 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:08:07.718438 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:07.718447 | 
orchestrator | 2025-04-13 01:08:07.718456 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-04-13 01:08:07.718471 | orchestrator | Sunday 13 April 2025 01:06:58 +0000 (0:00:14.145) 0:03:09.821 ********** 2025-04-13 01:08:07.718485 | orchestrator | changed: [testbed-node-3] 2025-04-13 01:08:07.718505 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:08:07.718518 | orchestrator | changed: [testbed-manager] 2025-04-13 01:08:07.718531 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:07.718544 | orchestrator | changed: [testbed-node-4] 2025-04-13 01:08:07.718556 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:08:07.718568 | orchestrator | changed: [testbed-node-5] 2025-04-13 01:08:07.718582 | orchestrator | 2025-04-13 01:08:07.718594 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-04-13 01:08:07.718607 | orchestrator | Sunday 13 April 2025 01:07:16 +0000 (0:00:17.404) 0:03:27.225 ********** 2025-04-13 01:08:07.718619 | orchestrator | changed: [testbed-manager] 2025-04-13 01:08:07.718632 | orchestrator | 2025-04-13 01:08:07.718646 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-04-13 01:08:07.718659 | orchestrator | Sunday 13 April 2025 01:07:26 +0000 (0:00:10.437) 0:03:37.662 ********** 2025-04-13 01:08:07.718668 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:07.718683 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:08:07.718691 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:08:07.718699 | orchestrator | 2025-04-13 01:08:07.718707 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-04-13 01:08:07.718715 | orchestrator | Sunday 13 April 2025 01:07:40 +0000 (0:00:13.820) 0:03:51.483 ********** 2025-04-13 01:08:07.718723 | orchestrator | changed: [testbed-manager] 2025-04-13 01:08:07.718733 | 
orchestrator | 2025-04-13 01:08:07.718746 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-04-13 01:08:07.718759 | orchestrator | Sunday 13 April 2025 01:07:54 +0000 (0:00:13.998) 0:04:05.481 ********** 2025-04-13 01:08:07.718772 | orchestrator | changed: [testbed-node-5] 2025-04-13 01:08:07.718784 | orchestrator | changed: [testbed-node-3] 2025-04-13 01:08:07.718796 | orchestrator | changed: [testbed-node-4] 2025-04-13 01:08:07.718808 | orchestrator | 2025-04-13 01:08:07.718820 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:08:07.718834 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-04-13 01:08:07.718848 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-13 01:08:07.718862 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-13 01:08:07.718876 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-13 01:08:07.718890 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-13 01:08:07.718904 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-13 01:08:07.718918 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-13 01:08:07.718931 | orchestrator | 2025-04-13 01:08:07.718945 | orchestrator | 2025-04-13 01:08:07.718958 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:08:07.718971 | orchestrator | Sunday 13 April 2025 01:08:06 +0000 (0:00:11.937) 0:04:17.419 ********** 2025-04-13 01:08:07.718982 | orchestrator | 
=============================================================================== 2025-04-13 01:08:07.718990 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 38.27s 2025-04-13 01:08:07.719003 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 21.59s 2025-04-13 01:08:07.719011 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.40s 2025-04-13 01:08:07.719019 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.35s 2025-04-13 01:08:07.719027 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.81s 2025-04-13 01:08:07.719034 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 14.15s 2025-04-13 01:08:07.719042 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 14.00s 2025-04-13 01:08:07.719050 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 13.82s 2025-04-13 01:08:07.719058 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.94s 2025-04-13 01:08:07.719066 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 10.44s 2025-04-13 01:08:07.719074 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.94s 2025-04-13 01:08:07.719082 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.48s 2025-04-13 01:08:07.719095 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 6.67s 2025-04-13 01:08:07.719103 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.51s 2025-04-13 01:08:07.719111 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.13s 2025-04-13 01:08:07.719119 | orchestrator | prometheus : 
Copying over my.cnf for mysqld_exporter -------------------- 4.20s 2025-04-13 01:08:07.719173 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 4.17s 2025-04-13 01:08:07.719187 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.97s 2025-04-13 01:08:07.719204 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.62s 2025-04-13 01:08:10.770603 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 3.26s 2025-04-13 01:08:10.770725 | orchestrator | 2025-04-13 01:08:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:08:10.770744 | orchestrator | 2025-04-13 01:08:07 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:08:10.770759 | orchestrator | 2025-04-13 01:08:07 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:08:10.770773 | orchestrator | 2025-04-13 01:08:07 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:08:10.770804 | orchestrator | 2025-04-13 01:08:10 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state STARTED 2025-04-13 01:08:10.776729 | orchestrator | 2025-04-13 01:08:10 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:08:10.776933 | orchestrator | 2025-04-13 01:08:10 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:08:10.780506 | orchestrator | 2025-04-13 01:08:10 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:08:10.781258 | orchestrator | 2025-04-13 01:08:10 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:08:13.836722 | orchestrator | 2025-04-13 01:08:10 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:08:35.222523 | orchestrator | 2025-04-13 01:08:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:08:38.255508 | orchestrator | 2025-04-13 01:08:38.255629 | orchestrator | 2025-04-13 01:08:38.255648 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 01:08:38.255664 | orchestrator | 2025-04-13 01:08:38.255679 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 01:08:38.255693 | orchestrator | Sunday 13 April 2025 01:05:36 +0000 (0:00:00.308) 0:00:00.308 ********** 2025-04-13 01:08:38.255707 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:08:38.255722 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:08:38.255736 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:08:38.255750 | orchestrator | ok: [testbed-node-3] 2025-04-13 01:08:38.255764 | orchestrator | ok: [testbed-node-4] 2025-04-13 01:08:38.255778 | orchestrator | ok: [testbed-node-5] 2025-04-13 01:08:38.255792 | orchestrator | 2025-04-13 01:08:38.255806 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 01:08:38.255820 | orchestrator | Sunday 13 April 2025 01:05:37 +0000 (0:00:00.671) 0:00:00.979 ********** 2025-04-13 01:08:38.255834 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-04-13 01:08:38.255848 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-04-13 01:08:38.256372 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-04-13 01:08:38.256389 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-04-13 01:08:38.256402 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-04-13 01:08:38.256575 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-04-13
01:08:38.256591 | orchestrator | 2025-04-13 01:08:38.256606 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-04-13 01:08:38.256619 | orchestrator | 2025-04-13 01:08:38.256633 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-13 01:08:38.256647 | orchestrator | Sunday 13 April 2025 01:05:37 +0000 (0:00:00.931) 0:00:01.911 ********** 2025-04-13 01:08:38.256661 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 01:08:38.256677 | orchestrator | 2025-04-13 01:08:38.256692 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-04-13 01:08:38.257205 | orchestrator | Sunday 13 April 2025 01:05:39 +0000 (0:00:01.357) 0:00:03.268 ********** 2025-04-13 01:08:38.257229 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-04-13 01:08:38.257244 | orchestrator | 2025-04-13 01:08:38.257258 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-04-13 01:08:38.257272 | orchestrator | Sunday 13 April 2025 01:05:42 +0000 (0:00:03.450) 0:00:06.718 ********** 2025-04-13 01:08:38.257287 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-04-13 01:08:38.257302 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-04-13 01:08:38.257907 | orchestrator | 2025-04-13 01:08:38.257926 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-04-13 01:08:38.257956 | orchestrator | Sunday 13 April 2025 01:05:49 +0000 (0:00:06.677) 0:00:13.396 ********** 2025-04-13 01:08:38.257971 | orchestrator | ok: [testbed-node-0] => (item=service) 
2025-04-13 01:08:38.257985 | orchestrator | 2025-04-13 01:08:38.257999 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-04-13 01:08:38.258013 | orchestrator | Sunday 13 April 2025 01:05:52 +0000 (0:00:03.416) 0:00:16.812 ********** 2025-04-13 01:08:38.258092 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-13 01:08:38.258106 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-04-13 01:08:38.258120 | orchestrator | 2025-04-13 01:08:38.258225 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-04-13 01:08:38.258250 | orchestrator | Sunday 13 April 2025 01:05:57 +0000 (0:00:04.133) 0:00:20.945 ********** 2025-04-13 01:08:38.258274 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-13 01:08:38.258298 | orchestrator | 2025-04-13 01:08:38.258320 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-04-13 01:08:38.258341 | orchestrator | Sunday 13 April 2025 01:06:00 +0000 (0:00:03.210) 0:00:24.156 ********** 2025-04-13 01:08:38.258355 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-04-13 01:08:38.258369 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-04-13 01:08:38.258382 | orchestrator | 2025-04-13 01:08:38.258396 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-04-13 01:08:38.258410 | orchestrator | Sunday 13 April 2025 01:06:08 +0000 (0:00:08.582) 0:00:32.738 ********** 2025-04-13 01:08:38.258487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 01:08:38.258511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.258528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 01:08:38.258559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.258612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-13 01:08:38.258631 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.258685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-13 01:08:38.258703 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.258728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-13 01:08:38.258744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 01:08:38.258771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.258819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.258836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.258860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.258886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.258902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.258969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.258999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.259036 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.259076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.259101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 
'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.259124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.259530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.259552 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.259582 | orchestrator | 2025-04-13 01:08:38.259598 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-13 01:08:38.259613 | orchestrator | Sunday 13 April 2025 01:06:11 +0000 (0:00:02.912) 0:00:35.651 ********** 2025-04-13 01:08:38.259627 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:38.259642 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:38.259656 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:38.259670 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 01:08:38.259684 | orchestrator | 2025-04-13 01:08:38.259698 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 
2025-04-13 01:08:38.259712 | orchestrator | Sunday 13 April 2025 01:06:13 +0000 (0:00:01.432) 0:00:37.084 ********** 2025-04-13 01:08:38.259725 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-04-13 01:08:38.259739 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-04-13 01:08:38.259753 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-04-13 01:08:38.259767 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-04-13 01:08:38.259781 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-04-13 01:08:38.259795 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-04-13 01:08:38.259808 | orchestrator | 2025-04-13 01:08:38.259822 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-04-13 01:08:38.259835 | orchestrator | Sunday 13 April 2025 01:06:18 +0000 (0:00:05.518) 0:00:42.602 ********** 2025-04-13 01:08:38.259850 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-13 01:08:38.259867 | orchestrator | 
skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-13 01:08:38.260009 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-13 01:08:38.260044 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-13 01:08:38.260061 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-13 01:08:38.260079 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}])  2025-04-13 01:08:38.260096 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-13 01:08:38.260228 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-13 01:08:38.260276 | orchestrator | changed: [testbed-node-4] 
=> (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-13 01:08:38.260302 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-13 01:08:38.260326 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-13 01:08:38.260397 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-13 01:08:38 | INFO  | Task ecb41f12-3fc9-4436-b87b-796cc3631460 is in state SUCCESS 2025-04-13 01:08:38.260578 | orchestrator | 2025-04-13 01:08:38.260600 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-04-13 01:08:38.260621 | orchestrator | Sunday 13 April 2025 01:06:24 +0000 (0:00:06.082) 0:00:48.684 ********** 2025-04-13 01:08:38.260642 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-04-13 01:08:38.260674 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}) 2025-04-13 01:08:38.260695 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-04-13 01:08:38.260714 | orchestrator | 2025-04-13 01:08:38.260734 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-04-13 01:08:38.260755 | orchestrator | Sunday 13 April 2025 01:06:27 +0000 (0:00:02.542) 0:00:51.227 ********** 2025-04-13 01:08:38.260775 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-04-13 01:08:38.260796 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-04-13 01:08:38.260817 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-04-13 01:08:38.260837 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-04-13 01:08:38.260849 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-04-13 01:08:38.260862 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-04-13 01:08:38.260874 | orchestrator | 2025-04-13 01:08:38.260887 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-04-13 01:08:38.260899 | orchestrator | Sunday 13 April 2025 01:06:30 +0000 (0:00:03.302) 0:00:54.529 ********** 2025-04-13 01:08:38.260911 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-04-13 01:08:38.260924 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-04-13 01:08:38.260936 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-04-13 01:08:38.260948 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-04-13 01:08:38.260961 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-04-13 01:08:38.260973 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-04-13 01:08:38.260985 | orchestrator | 2025-04-13 01:08:38.260997 | 
orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-04-13 01:08:38.261009 | orchestrator | Sunday 13 April 2025 01:06:31 +0000 (0:00:01.273) 0:00:55.803 ********** 2025-04-13 01:08:38.261021 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:38.261034 | orchestrator | 2025-04-13 01:08:38.261046 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-04-13 01:08:38.261059 | orchestrator | Sunday 13 April 2025 01:06:31 +0000 (0:00:00.100) 0:00:55.904 ********** 2025-04-13 01:08:38.261071 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:38.261083 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:38.261095 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:38.261108 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:08:38.261120 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:08:38.261155 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:08:38.261168 | orchestrator | 2025-04-13 01:08:38.261183 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-13 01:08:38.261197 | orchestrator | Sunday 13 April 2025 01:06:32 +0000 (0:00:00.629) 0:00:56.533 ********** 2025-04-13 01:08:38.261212 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 01:08:38.261240 | orchestrator | 2025-04-13 01:08:38.261255 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-04-13 01:08:38.261269 | orchestrator | Sunday 13 April 2025 01:06:33 +0000 (0:00:01.361) 0:00:57.895 ********** 2025-04-13 01:08:38.261283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-13 01:08:38.261417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-13 01:08:38.261439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.261455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.261470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-13 01:08:38.261501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.261548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.261564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.261578 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.261601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2025-04-13 01:08:38.261623 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.261636 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.261649 | orchestrator | 2025-04-13 01:08:38.261661 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-04-13 01:08:38.261675 | orchestrator | Sunday 13 April 2025 01:06:38 +0000 (0:00:04.137) 0:01:02.033 ********** 2025-04-13 01:08:38.261716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 01:08:38.261731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.261744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 01:08:38.261764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.261777 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:38.261803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 01:08:38.261843 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.261858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.261872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.261902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.261917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.261930 | orchestrator | skipping: 
[testbed-node-1] 2025-04-13 01:08:38.261942 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:38.261954 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:08:38.261967 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:08:38.262007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262097 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:08:38.262109 | orchestrator 
| 2025-04-13 01:08:38.262122 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-04-13 01:08:38.262154 | orchestrator | Sunday 13 April 2025 01:06:40 +0000 (0:00:02.198) 0:01:04.231 ********** 2025-04-13 01:08:38.262167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 01:08:38.262205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 01:08:38.262269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262284 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:38.262297 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:38.262310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 01:08:38.262329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262343 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:38.262367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262394 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:08:38.262436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262478 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:08:38.262515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262542 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:08:38.262555 | orchestrator | 2025-04-13 01:08:38.262567 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-04-13 01:08:38.262580 | orchestrator | Sunday 13 April 2025 01:06:42 +0000 (0:00:02.441) 0:01:06.672 ********** 2025-04-13 01:08:38.262592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 01:08:38.262635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-13 01:08:38.262671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:08:38.262695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.262709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.262758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.262782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.262810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.262837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.262851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.262895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.262924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.262958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.262979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263199 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263292 | orchestrator |
2025-04-13 01:08:38.263304 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-04-13 01:08:38.263317 | orchestrator | Sunday 13 April 2025 01:06:46 +0000 (0:00:03.883) 0:01:10.556 **********
2025-04-13 01:08:38.263329 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-04-13 01:08:38.263342 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:38.263354 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-04-13 01:08:38.263367 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:38.263389 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-04-13 01:08:38.263402 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:38.263414 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-04-13 01:08:38.263427 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-04-13 01:08:38.263439 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-04-13 01:08:38.263451 | orchestrator |
2025-04-13 01:08:38.263464 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2025-04-13 01:08:38.263476 | orchestrator | Sunday 13 April 2025 01:06:49 +0000 (0:00:03.091) 0:01:13.648 **********
2025-04-13 01:08:38.263489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.263502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.263541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.263578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.263614 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.263658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.263672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263685 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.263921 | orchestrator |
2025-04-13 01:08:38.263933 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-04-13 01:08:38.263946 | orchestrator | Sunday 13 April 2025 01:06:59 +0000 (0:00:10.136) 0:01:23.784 **********
2025-04-13 01:08:38.263958 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:38.263971 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:38.263983 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:38.263996 | orchestrator | changed: [testbed-node-3]
2025-04-13 01:08:38.264008 | orchestrator | changed: [testbed-node-4]
2025-04-13 01:08:38.264020 | orchestrator | changed: [testbed-node-5]
2025-04-13 01:08:38.264032 | orchestrator |
2025-04-13 01:08:38.264045 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2025-04-13 01:08:38.264057 | orchestrator | Sunday 13 April 2025 01:07:03 +0000 (0:00:04.111) 0:01:27.896 **********
2025-04-13 01:08:38.264070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.264083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged':
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.264218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264265 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:38.264278 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:38.264291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.264310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes':
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264354 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:38.264364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.264383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264429 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:38.264440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'},
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.264450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.264467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264540 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:38.264558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264569 | orchestrator |
skipping: [testbed-node-5]
2025-04-13 01:08:38.264580 | orchestrator |
2025-04-13 01:08:38.264590 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2025-04-13 01:08:38.264600 | orchestrator | Sunday 13 April 2025 01:07:06 +0000 (0:00:02.574) 0:01:30.470 **********
2025-04-13 01:08:38.264610 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:38.264620 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:38.264630 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:38.264640 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:08:38.264650 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:08:38.264660 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:08:38.264670 | orchestrator |
2025-04-13 01:08:38.264680 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2025-04-13 01:08:38.264690 | orchestrator | Sunday 13 April 2025 01:07:07 +0000 (0:00:01.395) 0:01:31.865 **********
2025-04-13 01:08:38.264707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.264718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.264745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.264755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776',
'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.264789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.264800 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-04-13 01:08:38.264827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes':
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-13 01:08:38.264989 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '',
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.265005 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.265016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.265027 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-13 01:08:38.265043 | orchestrator | 2025-04-13 01:08:38.265053 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-13 01:08:38.265063 | orchestrator | Sunday 13 April 2025 01:07:11 +0000 (0:00:03.390) 0:01:35.255 ********** 2025-04-13 01:08:38.265074 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:38.265084 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:38.265093 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:38.265103 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:08:38.265113 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:08:38.265123 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:08:38.265148 | orchestrator | 2025-04-13 01:08:38.265158 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-04-13 01:08:38.265168 | orchestrator | Sunday 13 April 2025 01:07:12 +0000 (0:00:00.924) 0:01:36.179 ********** 2025-04-13 01:08:38.265178 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:38.265188 | orchestrator | 2025-04-13 01:08:38.265198 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-04-13 01:08:38.265208 | orchestrator | Sunday 13 April 2025 01:07:14 +0000 (0:00:02.489) 0:01:38.668 ********** 2025-04-13 01:08:38.265218 | orchestrator | changed: [testbed-node-0] 2025-04-13 
01:08:38.265228 | orchestrator | 2025-04-13 01:08:38.265238 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-04-13 01:08:38.265248 | orchestrator | Sunday 13 April 2025 01:07:17 +0000 (0:00:02.394) 0:01:41.063 ********** 2025-04-13 01:08:38.265258 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:38.265268 | orchestrator | 2025-04-13 01:08:38.265278 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-13 01:08:38.265287 | orchestrator | Sunday 13 April 2025 01:07:34 +0000 (0:00:17.594) 0:01:58.657 ********** 2025-04-13 01:08:38.265297 | orchestrator | 2025-04-13 01:08:38.265308 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-13 01:08:38.265325 | orchestrator | Sunday 13 April 2025 01:07:34 +0000 (0:00:00.044) 0:01:58.702 ********** 2025-04-13 01:08:38.265342 | orchestrator | 2025-04-13 01:08:38.265359 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-13 01:08:38.265375 | orchestrator | Sunday 13 April 2025 01:07:34 +0000 (0:00:00.136) 0:01:58.838 ********** 2025-04-13 01:08:38.265391 | orchestrator | 2025-04-13 01:08:38.265419 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-13 01:08:38.265437 | orchestrator | Sunday 13 April 2025 01:07:34 +0000 (0:00:00.040) 0:01:58.878 ********** 2025-04-13 01:08:38.265453 | orchestrator | 2025-04-13 01:08:38.265468 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-13 01:08:38.265478 | orchestrator | Sunday 13 April 2025 01:07:34 +0000 (0:00:00.040) 0:01:58.918 ********** 2025-04-13 01:08:38.265488 | orchestrator | 2025-04-13 01:08:38.265498 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-13 01:08:38.265508 | orchestrator | Sunday 13 
April 2025 01:07:35 +0000 (0:00:00.040) 0:01:58.959 ********** 2025-04-13 01:08:38.265518 | orchestrator | 2025-04-13 01:08:38.265528 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-04-13 01:08:38.265538 | orchestrator | Sunday 13 April 2025 01:07:35 +0000 (0:00:00.157) 0:01:59.116 ********** 2025-04-13 01:08:38.265548 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:38.265565 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:08:38.265575 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:08:38.265585 | orchestrator | 2025-04-13 01:08:38.265595 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-04-13 01:08:38.265605 | orchestrator | Sunday 13 April 2025 01:07:52 +0000 (0:00:16.944) 0:02:16.061 ********** 2025-04-13 01:08:38.265615 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:38.265625 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:08:38.265640 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:08:38.265651 | orchestrator | 2025-04-13 01:08:38.265661 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-04-13 01:08:38.265671 | orchestrator | Sunday 13 April 2025 01:08:02 +0000 (0:00:10.580) 0:02:26.641 ********** 2025-04-13 01:08:38.265681 | orchestrator | changed: [testbed-node-3] 2025-04-13 01:08:38.265691 | orchestrator | changed: [testbed-node-5] 2025-04-13 01:08:38.265701 | orchestrator | changed: [testbed-node-4] 2025-04-13 01:08:38.265711 | orchestrator | 2025-04-13 01:08:38.265721 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-04-13 01:08:38.265731 | orchestrator | Sunday 13 April 2025 01:08:25 +0000 (0:00:23.036) 0:02:49.678 ********** 2025-04-13 01:08:38.265741 | orchestrator | changed: [testbed-node-5] 2025-04-13 01:08:38.265751 | orchestrator | changed: [testbed-node-4] 2025-04-13 
01:08:38.265761 | orchestrator | changed: [testbed-node-3] 2025-04-13 01:08:38.265771 | orchestrator | 2025-04-13 01:08:38.265781 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-04-13 01:08:38.265791 | orchestrator | Sunday 13 April 2025 01:08:36 +0000 (0:00:11.106) 0:03:00.785 ********** 2025-04-13 01:08:38.265801 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:38.265811 | orchestrator | 2025-04-13 01:08:38.265821 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:08:38.265831 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-13 01:08:38.265842 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-13 01:08:38.265853 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-13 01:08:38.265863 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-13 01:08:38.265873 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-13 01:08:38.265883 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-13 01:08:38.265893 | orchestrator | 2025-04-13 01:08:38.265903 | orchestrator | 2025-04-13 01:08:38.265913 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:08:38.265923 | orchestrator | Sunday 13 April 2025 01:08:37 +0000 (0:00:00.501) 0:03:01.286 ********** 2025-04-13 01:08:38.265933 | orchestrator | =============================================================================== 2025-04-13 01:08:38.265943 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.04s 2025-04-13 01:08:38.265953 | 
orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.59s 2025-04-13 01:08:38.265963 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 16.94s 2025-04-13 01:08:38.265973 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.11s 2025-04-13 01:08:38.265983 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.58s 2025-04-13 01:08:38.265993 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.14s 2025-04-13 01:08:38.266008 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.58s 2025-04-13 01:08:38.266046 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.68s 2025-04-13 01:08:38.266058 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.08s 2025-04-13 01:08:38.266068 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 5.52s 2025-04-13 01:08:38.266078 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.14s 2025-04-13 01:08:38.266093 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.13s 2025-04-13 01:08:38.266103 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 4.11s 2025-04-13 01:08:38.266113 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.88s 2025-04-13 01:08:38.266123 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.45s 2025-04-13 01:08:38.266160 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.42s 2025-04-13 01:08:38.266171 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.39s 2025-04-13 01:08:38.266180 | orchestrator | 
cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.30s 2025-04-13 01:08:38.266190 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.21s 2025-04-13 01:08:38.266200 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.09s 2025-04-13 01:08:38.266210 | orchestrator | 2025-04-13 01:08:38 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:08:38.266221 | orchestrator | 2025-04-13 01:08:38 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:08:38.266231 | orchestrator | 2025-04-13 01:08:38 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:08:38.266246 | orchestrator | 2025-04-13 01:08:38 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:08:41.294333 | orchestrator | 2025-04-13 01:08:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:08:41.294466 | orchestrator | 2025-04-13 01:08:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:08:41.295515 | orchestrator | 2025-04-13 01:08:41 | INFO  | Task 69f096a8-4a67-4065-9848-1b46d6ddf0ce is in state STARTED 2025-04-13 01:08:41.296402 | orchestrator | 2025-04-13 01:08:41 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:08:41.297148 | orchestrator | 2025-04-13 01:08:41 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:08:41.298001 | orchestrator | 2025-04-13 01:08:41 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:08:44.341581 | orchestrator | 2025-04-13 01:08:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:08:44.341725 | orchestrator | 2025-04-13 01:08:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:08:44.342777 | orchestrator | 2025-04-13 01:08:44 | INFO  | Task 
69f096a8-4a67-4065-9848-1b46d6ddf0ce is in state STARTED 2025-04-13 01:08:44.342907 | orchestrator | 2025-04-13 01:08:44 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:08:44.343457 | orchestrator | 2025-04-13 01:08:44 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:08:44.344308 | orchestrator | 2025-04-13 01:08:44 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:08:47.389574 | orchestrator | 2025-04-13 01:08:44 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:08:47.389707 | orchestrator | 2025-04-13 01:08:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:08:47.397389 | orchestrator | 2025-04-13 01:08:47 | INFO  | Task 69f096a8-4a67-4065-9848-1b46d6ddf0ce is in state STARTED 2025-04-13 01:08:47.399039 | orchestrator | 2025-04-13 01:08:47 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:08:47.400676 | orchestrator | 2025-04-13 01:08:47 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:08:47.402793 | orchestrator | 2025-04-13 01:08:47 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state STARTED 2025-04-13 01:08:50.455477 | orchestrator | 2025-04-13 01:08:47 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:08:50.455649 | orchestrator | 2025-04-13 01:08:50 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:08:50.457093 | orchestrator | 2025-04-13 01:08:50 | INFO  | Task 69f096a8-4a67-4065-9848-1b46d6ddf0ce is in state STARTED 2025-04-13 01:08:50.458927 | orchestrator | 2025-04-13 01:08:50 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:08:50.459618 | orchestrator | 2025-04-13 01:08:50 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:08:50.460863 | orchestrator | 2025-04-13 01:08:50 | INFO  | Task 
3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:08:50.463082 | orchestrator | 2025-04-13 01:08:50 | INFO  | Task 23c9db85-9e48-4fe8-816c-4df613fce759 is in state SUCCESS 2025-04-13 01:08:50.464937 | orchestrator | 2025-04-13 01:08:50 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:08:50.465158 | orchestrator | 2025-04-13 01:08:50.465569 | orchestrator | 2025-04-13 01:08:50.465623 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 01:08:50.465649 | orchestrator | 2025-04-13 01:08:50.465672 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 01:08:50.465697 | orchestrator | Sunday 13 April 2025 01:05:28 +0000 (0:00:00.310) 0:00:00.310 ********** 2025-04-13 01:08:50.465721 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:08:50.465738 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:08:50.465752 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:08:50.465766 | orchestrator | 2025-04-13 01:08:50.465780 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 01:08:50.465798 | orchestrator | Sunday 13 April 2025 01:05:28 +0000 (0:00:00.399) 0:00:00.710 ********** 2025-04-13 01:08:50.465822 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-04-13 01:08:50.465847 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-04-13 01:08:50.465869 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-04-13 01:08:50.465891 | orchestrator | 2025-04-13 01:08:50.465915 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-04-13 01:08:50.465938 | orchestrator | 2025-04-13 01:08:50.465960 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-13 01:08:50.465982 | orchestrator | Sunday 13 April 2025 01:05:29 +0000 (0:00:00.315) 
0:00:01.025 ********** 2025-04-13 01:08:50.466005 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:08:50.466110 | orchestrator | 2025-04-13 01:08:50.466194 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-04-13 01:08:50.466218 | orchestrator | Sunday 13 April 2025 01:05:30 +0000 (0:00:01.198) 0:00:02.223 ********** 2025-04-13 01:08:50.466235 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-04-13 01:08:50.466251 | orchestrator | 2025-04-13 01:08:50.466267 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-04-13 01:08:50.466282 | orchestrator | Sunday 13 April 2025 01:05:34 +0000 (0:00:03.841) 0:00:06.065 ********** 2025-04-13 01:08:50.466325 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-04-13 01:08:50.466342 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-04-13 01:08:50.466358 | orchestrator | 2025-04-13 01:08:50.466373 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-04-13 01:08:50.466389 | orchestrator | Sunday 13 April 2025 01:05:41 +0000 (0:00:06.876) 0:00:12.942 ********** 2025-04-13 01:08:50.466405 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-13 01:08:50.466424 | orchestrator | 2025-04-13 01:08:50.466449 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-04-13 01:08:50.466472 | orchestrator | Sunday 13 April 2025 01:05:44 +0000 (0:00:03.768) 0:00:16.710 ********** 2025-04-13 01:08:50.466496 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-13 01:08:50.466523 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-04-13 01:08:50.466547 | 
orchestrator | 2025-04-13 01:08:50.466562 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-04-13 01:08:50.466576 | orchestrator | Sunday 13 April 2025 01:05:48 +0000 (0:00:03.995) 0:00:20.705 ********** 2025-04-13 01:08:50.466590 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-13 01:08:50.466604 | orchestrator | 2025-04-13 01:08:50.466618 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-04-13 01:08:50.466632 | orchestrator | Sunday 13 April 2025 01:05:52 +0000 (0:00:03.530) 0:00:24.235 ********** 2025-04-13 01:08:50.466646 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-04-13 01:08:50.466660 | orchestrator | 2025-04-13 01:08:50.466674 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-04-13 01:08:50.466688 | orchestrator | Sunday 13 April 2025 01:05:56 +0000 (0:00:04.245) 0:00:28.481 ********** 2025-04-13 01:08:50.466725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-13 01:08:50.466745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-13 01:08:50.466772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-13 01:08:50.466813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-13 01:08:50.466838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-13 01:08:50.466863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-04-13 01:08:50.466887 | orchestrator |
2025-04-13 01:08:50.466902 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-04-13 01:08:50.466916 | orchestrator | Sunday 13 April 2025 01:06:00 +0000 (0:00:03.515) 0:00:31.996 **********
2025-04-13 01:08:50.466930 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 01:08:50.466944 | orchestrator |
2025-04-13 01:08:50.466959 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-04-13 01:08:50.466973 | orchestrator | Sunday 13 April 2025 01:06:00 +0000 (0:00:00.591) 0:00:32.588 **********
2025-04-13 01:08:50.466987 | orchestrator | changed: [testbed-node-1]
2025-04-13 01:08:50.467001 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:08:50.467015 | orchestrator | changed: [testbed-node-2]
2025-04-13 01:08:50.467029 | orchestrator |
2025-04-13 01:08:50.467043 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-04-13 01:08:50.467057 | orchestrator | Sunday 13 April 2025 01:06:10 +0000 (0:00:09.391) 0:00:41.979 **********
2025-04-13 01:08:50.467071 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-04-13 01:08:50.467085 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-04-13 01:08:50.467099 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-04-13 01:08:50.467113 | orchestrator |
2025-04-13 01:08:50.467151 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-04-13 01:08:50.467165 | orchestrator | Sunday 13 April 2025 01:06:12 +0000 (0:00:02.293) 0:00:44.273 **********
2025-04-13 01:08:50.467179 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-04-13 01:08:50.467193 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-04-13 01:08:50.467207 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-04-13 01:08:50.467221 | orchestrator |
2025-04-13 01:08:50.467235 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-04-13 01:08:50.467249 | orchestrator | Sunday 13 April 2025 01:06:14 +0000 (0:00:01.713) 0:00:46.306 **********
2025-04-13 01:08:50.467262 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:08:50.467283 | orchestrator | ok: [testbed-node-1]
2025-04-13 01:08:50.467297 | orchestrator | ok: [testbed-node-2]
2025-04-13 01:08:50.467311 | orchestrator |
2025-04-13 01:08:50.467325 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-04-13 01:08:50.467339 | orchestrator | Sunday 13 April 2025 01:06:16 +0000 (0:00:00.767) 0:00:48.020 **********
2025-04-13 01:08:50.467353 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:50.467373 | orchestrator |
2025-04-13 01:08:50.467387 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-04-13 01:08:50.467401 | orchestrator | Sunday 13 April 2025 01:06:17 +0000 (0:00:00.767) 0:00:48.793 **********
2025-04-13 01:08:50.467415 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:08:50.467429 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:08:50.467443 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:08:50.467465 | orchestrator |
2025-04-13 01:08:50.467479 | orchestrator | TASK [glance : include_tasks]
************************************************** 2025-04-13 01:08:50.467493 | orchestrator | Sunday 13 April 2025 01:06:17 +0000 (0:00:00.614) 0:00:49.407 ********** 2025-04-13 01:08:50.467507 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:08:50.467521 | orchestrator | 2025-04-13 01:08:50.467535 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-04-13 01:08:50.467550 | orchestrator | Sunday 13 April 2025 01:06:19 +0000 (0:00:01.960) 0:00:51.368 ********** 2025-04-13 01:08:50.467573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-13 01:08:50.467590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-13 01:08:50.467621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-13 01:08:50.467637 | orchestrator | 2025-04-13 01:08:50.467651 | orchestrator | TASK [service-cert-copy : glance | Copying over 
backend internal TLS certificate] *** 2025-04-13 01:08:50.467665 | orchestrator | Sunday 13 April 2025 01:06:28 +0000 (0:00:08.472) 0:00:59.841 ********** 2025-04-13 01:08:50.467680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-13 01:08:50.467695 | orchestrator | skipping: [testbed-node-0] 2025-04-13 
01:08:50.467717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-13 01:08:50.467739 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:50.467754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-13 01:08:50.467770 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:50.467784 | orchestrator | 2025-04-13 01:08:50.467798 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-04-13 01:08:50.467811 | orchestrator | Sunday 13 April 2025 01:06:32 +0000 (0:00:04.211) 0:01:04.052 ********** 2025-04-13 01:08:50.467833 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-13 01:08:50.467858 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:50.467873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-13 01:08:50.467889 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:50.467903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-13 01:08:50.467925 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:50.467939 | orchestrator | 2025-04-13 01:08:50.467953 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-04-13 01:08:50.467967 | orchestrator | Sunday 13 April 2025 01:06:37 +0000 (0:00:04.863) 0:01:08.916 ********** 2025-04-13 01:08:50.467981 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:50.467996 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:50.468010 | orchestrator | skipping: [testbed-node-2] 2025-04-13 
01:08:50.468023 | orchestrator | 2025-04-13 01:08:50.468043 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-04-13 01:08:50.468057 | orchestrator | Sunday 13 April 2025 01:06:42 +0000 (0:00:04.999) 0:01:13.915 ********** 2025-04-13 01:08:50.468072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}}}}) 2025-04-13 01:08:50.468088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-13 01:08:50.468118 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-13 01:08:50.468200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-13 01:08:50.468262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-13 01:08:50.468292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-13 01:08:50.468332 | orchestrator | 2025-04-13 01:08:50.468356 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-04-13 01:08:50.468369 | orchestrator | Sunday 13 April 2025 01:06:48 +0000 (0:00:06.683) 0:01:20.598 ********** 2025-04-13 01:08:50.468381 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:08:50.468394 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:50.468406 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:08:50.468418 | orchestrator | 2025-04-13 01:08:50.468431 | orchestrator | 
TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-04-13 01:08:50.468443 | orchestrator | Sunday 13 April 2025 01:07:04 +0000 (0:00:15.338) 0:01:35.937 ********** 2025-04-13 01:08:50.468456 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:50.468468 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:50.468480 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:50.468492 | orchestrator | 2025-04-13 01:08:50.468505 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-04-13 01:08:50.468517 | orchestrator | Sunday 13 April 2025 01:07:13 +0000 (0:00:09.644) 0:01:45.581 ********** 2025-04-13 01:08:50.468530 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:50.468542 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:50.468554 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:50.468566 | orchestrator | 2025-04-13 01:08:50.468578 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-04-13 01:08:50.468605 | orchestrator | Sunday 13 April 2025 01:07:20 +0000 (0:00:06.716) 0:01:52.298 ********** 2025-04-13 01:08:50.468626 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:50.468647 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:50.468669 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:50.468689 | orchestrator | 2025-04-13 01:08:50.468710 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-04-13 01:08:50.468723 | orchestrator | Sunday 13 April 2025 01:07:26 +0000 (0:00:06.143) 0:01:58.441 ********** 2025-04-13 01:08:50.468735 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:50.468754 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:50.468767 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:50.468779 | orchestrator | 2025-04-13 01:08:50.468791 | orchestrator | 
TASK [glance : Copying over existing policy file] ****************************** 2025-04-13 01:08:50.468804 | orchestrator | Sunday 13 April 2025 01:07:35 +0000 (0:00:08.573) 0:02:07.014 ********** 2025-04-13 01:08:50.468816 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:50.468829 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:50.468841 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:50.468853 | orchestrator | 2025-04-13 01:08:50.468866 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-04-13 01:08:50.468878 | orchestrator | Sunday 13 April 2025 01:07:35 +0000 (0:00:00.388) 0:02:07.403 ********** 2025-04-13 01:08:50.468890 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-04-13 01:08:50.468903 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:50.468916 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-04-13 01:08:50.468928 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:50.468941 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-04-13 01:08:50.468954 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:50.468966 | orchestrator | 2025-04-13 01:08:50.468978 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-04-13 01:08:50.468999 | orchestrator | Sunday 13 April 2025 01:07:39 +0000 (0:00:03.569) 0:02:10.972 ********** 2025-04-13 01:08:50.469013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-13 01:08:50.469033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-13 01:08:50.469048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-13 01:08:50.469074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-13 01:08:50.469089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-13 01:08:50.469109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-13 01:08:50.469122 | orchestrator | 2025-04-13 01:08:50.469185 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-13 01:08:50.469198 | orchestrator | Sunday 13 April 2025 01:07:44 +0000 (0:00:04.918) 0:02:15.890 ********** 2025-04-13 01:08:50.469210 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:08:50.469223 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:08:50.469235 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:08:50.469247 | orchestrator | 2025-04-13 01:08:50.469266 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-04-13 01:08:50.469279 | orchestrator | Sunday 13 April 2025 01:07:44 +0000 (0:00:00.515) 0:02:16.405 ********** 2025-04-13 01:08:50.469291 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:50.469303 | orchestrator | 2025-04-13 01:08:50.469316 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-04-13 01:08:50.469328 | orchestrator | Sunday 13 April 2025 01:07:46 +0000 (0:00:02.237) 0:02:18.643 ********** 2025-04-13 01:08:50.469340 | orchestrator | changed: [testbed-node-0] 2025-04-13 
01:08:50.469352 | orchestrator | 2025-04-13 01:08:50.469365 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-04-13 01:08:50.469384 | orchestrator | Sunday 13 April 2025 01:07:49 +0000 (0:00:02.286) 0:02:20.929 ********** 2025-04-13 01:08:50.469396 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:50.469408 | orchestrator | 2025-04-13 01:08:50.469420 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-04-13 01:08:50.469433 | orchestrator | Sunday 13 April 2025 01:07:51 +0000 (0:00:02.150) 0:02:23.080 ********** 2025-04-13 01:08:50.469445 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:50.469457 | orchestrator | 2025-04-13 01:08:50.469469 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-04-13 01:08:50.469481 | orchestrator | Sunday 13 April 2025 01:08:16 +0000 (0:00:25.675) 0:02:48.756 ********** 2025-04-13 01:08:50.469493 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:50.469506 | orchestrator | 2025-04-13 01:08:50.469518 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-13 01:08:50.469530 | orchestrator | Sunday 13 April 2025 01:08:19 +0000 (0:00:02.272) 0:02:51.029 ********** 2025-04-13 01:08:50.469542 | orchestrator | 2025-04-13 01:08:50.469554 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-13 01:08:50.469567 | orchestrator | Sunday 13 April 2025 01:08:19 +0000 (0:00:00.058) 0:02:51.087 ********** 2025-04-13 01:08:50.469579 | orchestrator | 2025-04-13 01:08:50.469591 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-13 01:08:50.469603 | orchestrator | Sunday 13 April 2025 01:08:19 +0000 (0:00:00.055) 0:02:51.142 ********** 2025-04-13 01:08:50.469615 | orchestrator | 2025-04-13 01:08:50.469627 | 
orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-04-13 01:08:50.469639 | orchestrator | Sunday 13 April 2025 01:08:19 +0000 (0:00:00.209) 0:02:51.352 ********** 2025-04-13 01:08:50.469651 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:08:50.469663 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:08:50.469675 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:08:50.469685 | orchestrator | 2025-04-13 01:08:50.469695 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:08:50.469706 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-04-13 01:08:50.469717 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-04-13 01:08:50.469727 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-04-13 01:08:50.469737 | orchestrator | 2025-04-13 01:08:50.469747 | orchestrator | 2025-04-13 01:08:50.469757 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:08:50.469767 | orchestrator | Sunday 13 April 2025 01:08:47 +0000 (0:00:28.005) 0:03:19.358 ********** 2025-04-13 01:08:50.469784 | orchestrator | =============================================================================== 2025-04-13 01:08:50.469794 | orchestrator | glance : Restart glance-api container ---------------------------------- 28.01s 2025-04-13 01:08:50.469804 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.68s 2025-04-13 01:08:50.469814 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 15.34s 2025-04-13 01:08:50.469824 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 9.64s 2025-04-13 01:08:50.469834 | orchestrator | glance : 
Ensuring glance service ceph config subdir exists -------------- 9.39s 2025-04-13 01:08:50.469844 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 8.57s 2025-04-13 01:08:50.469854 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 8.47s 2025-04-13 01:08:50.469864 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.88s 2025-04-13 01:08:50.469874 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.72s 2025-04-13 01:08:50.469889 | orchestrator | glance : Copying over config.json files for services -------------------- 6.68s 2025-04-13 01:08:50.469899 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.14s 2025-04-13 01:08:50.469909 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.00s 2025-04-13 01:08:50.469919 | orchestrator | glance : Check glance containers ---------------------------------------- 4.92s 2025-04-13 01:08:50.469929 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.86s 2025-04-13 01:08:50.469939 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.25s 2025-04-13 01:08:50.469949 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.21s 2025-04-13 01:08:50.469959 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.00s 2025-04-13 01:08:50.469969 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.84s 2025-04-13 01:08:50.469979 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.77s 2025-04-13 01:08:50.469993 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.57s 2025-04-13 01:08:53.515726 | orchestrator | 2025-04-13 01:08:53 
| INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:08:53.517887 | orchestrator | 2025-04-13 01:08:53 | INFO  | Task 69f096a8-4a67-4065-9848-1b46d6ddf0ce is in state STARTED 2025-04-13 01:08:53.519586 | orchestrator | 2025-04-13 01:08:53 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:08:53.520543 | orchestrator | 2025-04-13 01:08:53 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:08:53.521381 | orchestrator | 2025-04-13 01:08:53 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:08:53.521424 | orchestrator | 2025-04-13 01:08:53 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:09:36.270417 | orchestrator | 2025-04-13 01:09:36 | INFO  | Task 69f096a8-4a67-4065-9848-1b46d6ddf0ce is in state SUCCESS 2025-04-13 01:09:48.473779 | orchestrator | 2025-04-13 01:09:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:09:48.473921 | orchestrator | 2025-04-13 01:09:48 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:09:48.475093 | orchestrator | 2025-04-13 01:09:48 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:09:48.476754 | orchestrator | 2025-04-13 01:09:48 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:09:48.478425 | orchestrator | 2025-04-13 01:09:48 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:09:48.478740 | orchestrator | 2025-04-13 01:09:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:09:51.526578 | orchestrator | 2025-04-13 01:09:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:09:51.529650 | orchestrator | 2025-04-13 01:09:51 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:09:51.531971 | orchestrator | 2025-04-13 01:09:51 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:09:51.532003 | orchestrator | 2025-04-13 01:09:51 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:09:51.532441 | orchestrator | 2025-04-13 01:09:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:09:54.592999 | orchestrator | 2025-04-13 01:09:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:09:54.593463 | orchestrator | 2025-04-13 01:09:54 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:09:54.593498 | orchestrator | 2025-04-13 01:09:54 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:09:54.594680 | orchestrator | 2025-04-13 01:09:54 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:09:57.649919 | orchestrator | 2025-04-13 01:09:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:09:57.650142 | orchestrator | 2025-04-13 01:09:57 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:09:57.652975 | orchestrator | 2025-04-13 01:09:57 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:09:57.658316 | orchestrator | 2025-04-13 01:09:57 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:09:57.659751 | orchestrator | 2025-04-13 01:09:57 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:00.722982 | orchestrator | 2025-04-13 01:09:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:00.723210 | orchestrator | 2025-04-13 01:10:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:00.723972 | orchestrator | 2025-04-13 01:10:00 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:00.725421 | orchestrator | 2025-04-13 01:10:00 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:10:00.727756 | orchestrator | 2025-04-13 01:10:00 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:03.773168 | orchestrator | 2025-04-13 01:10:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:03.773312 | orchestrator | 2025-04-13 01:10:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:03.774684 | orchestrator | 2025-04-13 01:10:03 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:03.775945 | orchestrator | 2025-04-13 01:10:03 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:10:03.777724 | orchestrator | 2025-04-13 01:10:03 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:06.827197 | orchestrator | 2025-04-13 01:10:03 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:06.827329 | orchestrator | 2025-04-13 01:10:06 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:06.828349 | orchestrator | 2025-04-13 01:10:06 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:06.829901 | orchestrator | 2025-04-13 01:10:06 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:10:06.831329 | orchestrator | 2025-04-13 01:10:06 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:09.881565 | orchestrator | 2025-04-13 01:10:06 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:09.881714 | orchestrator | 2025-04-13 01:10:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:09.882734 | orchestrator | 2025-04-13 01:10:09 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:09.884896 | orchestrator | 2025-04-13 01:10:09 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:10:09.887241 | orchestrator | 2025-04-13 01:10:09 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:12.933463 | orchestrator | 2025-04-13 01:10:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:12.933631 | orchestrator | 2025-04-13 01:10:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:12.935089 | orchestrator | 2025-04-13 01:10:12 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:12.936514 | orchestrator | 2025-04-13 01:10:12 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:10:12.937997 | orchestrator | 2025-04-13 01:10:12 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:15.986719 | orchestrator | 2025-04-13 01:10:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:15.986865 | orchestrator | 2025-04-13 01:10:15 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:15.988611 | orchestrator | 2025-04-13 01:10:15 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:15.991426 | orchestrator | 2025-04-13 01:10:15 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:10:15.993625 | orchestrator | 2025-04-13 01:10:15 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:15.993746 | orchestrator | 2025-04-13 01:10:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:19.039240 | orchestrator | 2025-04-13 01:10:19 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:19.041146 | orchestrator | 2025-04-13 01:10:19 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:19.042508 | orchestrator | 2025-04-13 01:10:19 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:10:19.044326 | orchestrator | 2025-04-13 01:10:19 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:22.090933 | orchestrator | 2025-04-13 01:10:19 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:22.091153 | orchestrator | 2025-04-13 01:10:22 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:22.092528 | orchestrator | 2025-04-13 01:10:22 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:22.093224 | orchestrator | 2025-04-13 01:10:22 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state STARTED 2025-04-13 01:10:22.095780 | orchestrator | 2025-04-13 01:10:22 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:25.140674 | orchestrator | 2025-04-13 01:10:22 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:25.140833 | orchestrator | 2025-04-13 01:10:25 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:25.142369 | orchestrator | 2025-04-13 01:10:25 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:25.144206 | orchestrator | 2025-04-13 01:10:25 | INFO  | Task 438a0df2-370f-4295-942f-fb64fe2f21f1 is in state SUCCESS 2025-04-13 01:10:25.146672 | orchestrator | 2025-04-13 01:10:25 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:28.203800 | orchestrator | 2025-04-13 01:10:25 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:28.203971 | orchestrator | 2025-04-13 01:10:28 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:28.205523 | orchestrator | 2025-04-13 01:10:28 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:28.207062 | orchestrator | 2025-04-13 01:10:28 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:28.207257 | orchestrator | 2025-04-13 01:10:28 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:31.255337 | orchestrator | 2025-04-13 01:10:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:31.256162 | orchestrator | 2025-04-13 01:10:31 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:31.257640 | orchestrator | 2025-04-13 01:10:31 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:34.310981 | orchestrator | 2025-04-13 01:10:31 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:34.311241 | orchestrator | 2025-04-13 01:10:34 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:34.311805 | orchestrator | 2025-04-13 01:10:34 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:34.313829 | orchestrator | 2025-04-13 01:10:34 | INFO  | Task 
3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:37.366607 | orchestrator | 2025-04-13 01:10:34 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:37.366746 | orchestrator | 2025-04-13 01:10:37 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:37.367481 | orchestrator | 2025-04-13 01:10:37 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:37.368985 | orchestrator | 2025-04-13 01:10:37 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:40.412520 | orchestrator | 2025-04-13 01:10:37 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:40.412616 | orchestrator | 2025-04-13 01:10:40 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:43.459548 | orchestrator | 2025-04-13 01:10:40 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:43.459641 | orchestrator | 2025-04-13 01:10:40 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:43.459654 | orchestrator | 2025-04-13 01:10:40 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:43.459699 | orchestrator | 2025-04-13 01:10:43 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:43.460809 | orchestrator | 2025-04-13 01:10:43 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:43.461381 | orchestrator | 2025-04-13 01:10:43 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state STARTED 2025-04-13 01:10:46.498105 | orchestrator | 2025-04-13 01:10:43 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:46.498252 | orchestrator | 2025-04-13 01:10:46 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:46.498817 | orchestrator | 2025-04-13 01:10:46 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state 
STARTED 2025-04-13 01:10:46.500930 | orchestrator | 2025-04-13 01:10:46 | INFO  | Task 3556ed93-f9d5-40d4-9f7e-0a415c491104 is in state SUCCESS 2025-04-13 01:10:46.502961 | orchestrator | 2025-04-13 01:10:46.503214 | orchestrator | 2025-04-13 01:10:46.503248 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 01:10:46.503266 | orchestrator | 2025-04-13 01:10:46.503280 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 01:10:46.503637 | orchestrator | Sunday 13 April 2025 01:08:40 +0000 (0:00:00.551) 0:00:00.551 ********** 2025-04-13 01:10:46.503663 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:10:46.503679 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:10:46.503695 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:10:46.503709 | orchestrator | 2025-04-13 01:10:46.503724 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 01:10:46.503738 | orchestrator | Sunday 13 April 2025 01:08:41 +0000 (0:00:00.459) 0:00:01.010 ********** 2025-04-13 01:10:46.503752 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-04-13 01:10:46.503766 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-04-13 01:10:46.503780 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-04-13 01:10:46.503793 | orchestrator | 2025-04-13 01:10:46.503807 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-04-13 01:10:46.503821 | orchestrator | 2025-04-13 01:10:46.503835 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-13 01:10:46.503849 | orchestrator | Sunday 13 April 2025 01:08:41 +0000 (0:00:00.243) 0:00:01.254 ********** 2025-04-13 01:10:46.503863 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-04-13 01:10:46.503878 | orchestrator | 2025-04-13 01:10:46.503892 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-04-13 01:10:46.503906 | orchestrator | Sunday 13 April 2025 01:08:42 +0000 (0:00:00.641) 0:00:01.895 ********** 2025-04-13 01:10:46.503921 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-04-13 01:10:46.503935 | orchestrator | 2025-04-13 01:10:46.504029 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-04-13 01:10:46.504051 | orchestrator | Sunday 13 April 2025 01:08:45 +0000 (0:00:03.471) 0:00:05.366 ********** 2025-04-13 01:10:46.504066 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-04-13 01:10:46.504687 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-04-13 01:10:46.504714 | orchestrator | 2025-04-13 01:10:46.504729 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-04-13 01:10:46.504743 | orchestrator | Sunday 13 April 2025 01:08:52 +0000 (0:00:06.763) 0:00:12.130 ********** 2025-04-13 01:10:46.504757 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-13 01:10:46.504772 | orchestrator | 2025-04-13 01:10:46.504786 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-04-13 01:10:46.504800 | orchestrator | Sunday 13 April 2025 01:08:55 +0000 (0:00:03.417) 0:00:15.547 ********** 2025-04-13 01:10:46.504836 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-13 01:10:46.504850 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-04-13 01:10:46.504864 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-04-13 01:10:46.504878 | orchestrator | 2025-04-13 01:10:46.504892 | 
orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-04-13 01:10:46.504906 | orchestrator | Sunday 13 April 2025 01:09:03 +0000 (0:00:08.148) 0:00:23.696 ********** 2025-04-13 01:10:46.504925 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-13 01:10:46.504940 | orchestrator | 2025-04-13 01:10:46.504954 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-04-13 01:10:46.504968 | orchestrator | Sunday 13 April 2025 01:09:07 +0000 (0:00:03.346) 0:00:27.043 ********** 2025-04-13 01:10:46.504982 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-04-13 01:10:46.504996 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-04-13 01:10:46.505009 | orchestrator | 2025-04-13 01:10:46.505023 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-04-13 01:10:46.505037 | orchestrator | Sunday 13 April 2025 01:09:15 +0000 (0:00:07.771) 0:00:34.814 ********** 2025-04-13 01:10:46.505051 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-04-13 01:10:46.505065 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-04-13 01:10:46.505079 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-04-13 01:10:46.505131 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-04-13 01:10:46.505145 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-04-13 01:10:46.505159 | orchestrator | 2025-04-13 01:10:46.505173 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-13 01:10:46.505187 | orchestrator | Sunday 13 April 2025 01:09:30 +0000 (0:00:15.956) 0:00:50.771 ********** 2025-04-13 01:10:46.505201 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-04-13 01:10:46.505215 | orchestrator | 2025-04-13 01:10:46.505228 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-04-13 01:10:46.505242 | orchestrator | Sunday 13 April 2025 01:09:31 +0000 (0:00:00.768) 0:00:51.539 ********** 2025-04-13 01:10:46.505304 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "<html><body><h1>503 Service Unavailable</h1>\nNo server is available to handle this request.\n</body></html>\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-04-13 01:10:46.505326 | orchestrator | 2025-04-13 01:10:46.505341 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:10:46.505362 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-04-13 01:10:46.505379 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:10:46.505396 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:10:46.505411 | orchestrator | 2025-04-13 01:10:46.505427 | orchestrator | 2025-04-13 01:10:46.505443 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:10:46.505472 | orchestrator | Sunday 13 April 2025 01:09:35 +0000 (0:00:03.368) 0:00:54.908 ********** 2025-04-13 01:10:46.505488 | orchestrator | =============================================================================== 2025-04-13 01:10:46.505512 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.96s 2025-04-13 01:10:46.505529 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.15s 2025-04-13 01:10:46.505545 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.77s 2025-04-13 01:10:46.505560 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.76s 2025-04-13 01:10:46.505575 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.47s 2025-04-13 01:10:46.505591 | orchestrator | service-ks-register : octavia | Creating projects -----------------------
3.42s 2025-04-13 01:10:46.505607 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.37s 2025-04-13 01:10:46.505622 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.35s 2025-04-13 01:10:46.505638 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.77s 2025-04-13 01:10:46.505653 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.64s 2025-04-13 01:10:46.505668 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s 2025-04-13 01:10:46.505682 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.24s 2025-04-13 01:10:46.505696 | orchestrator | 2025-04-13 01:10:46.505709 | orchestrator | 2025-04-13 01:10:46.505728 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 01:10:46.505742 | orchestrator | 2025-04-13 01:10:46.505756 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 01:10:46.505770 | orchestrator | Sunday 13 April 2025 01:08:09 +0000 (0:00:00.216) 0:00:00.216 ********** 2025-04-13 01:10:46.505784 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:10:46.505799 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:10:46.505813 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:10:46.505827 | orchestrator | 2025-04-13 01:10:46.505841 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 01:10:46.505855 | orchestrator | Sunday 13 April 2025 01:08:10 +0000 (0:00:00.415) 0:00:00.631 ********** 2025-04-13 01:10:46.505868 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-04-13 01:10:46.505882 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-04-13 01:10:46.505896 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True) 2025-04-13 01:10:46.505910 | orchestrator | 2025-04-13 01:10:46.505924 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-04-13 01:10:46.505937 | orchestrator | 2025-04-13 01:10:46.505951 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-04-13 01:10:46.505965 | orchestrator | Sunday 13 April 2025 01:08:10 +0000 (0:00:00.480) 0:00:01.111 ********** 2025-04-13 01:10:46.505979 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:10:46.505994 | orchestrator | ok: [testbed-node-1] 2025-04-13 01:10:46.506070 | orchestrator | ok: [testbed-node-2] 2025-04-13 01:10:46.506147 | orchestrator | 2025-04-13 01:10:46.506164 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:10:46.506178 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:10:46.506192 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:10:46.506207 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-13 01:10:46.506221 | orchestrator | 2025-04-13 01:10:46.506234 | orchestrator | 2025-04-13 01:10:46.506248 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:10:46.506262 | orchestrator | Sunday 13 April 2025 01:10:22 +0000 (0:02:11.913) 0:02:13.025 ********** 2025-04-13 01:10:46.506276 | orchestrator | =============================================================================== 2025-04-13 01:10:46.506299 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 131.91s 2025-04-13 01:10:46.506313 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2025-04-13 01:10:46.506327 | orchestrator | Group hosts based on 
Kolla action --------------------------------------- 0.42s
2025-04-13 01:10:46.506339 | orchestrator |
2025-04-13 01:10:46.506352 | orchestrator |
2025-04-13 01:10:46.506364 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-13 01:10:46.506376 | orchestrator |
2025-04-13 01:10:46.506389 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-13 01:10:46.506438 | orchestrator | Sunday 13 April 2025 01:08:50 +0000 (0:00:00.331) 0:00:00.331 **********
2025-04-13 01:10:46.506453 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:10:46.506466 | orchestrator | ok: [testbed-node-1]
2025-04-13 01:10:46.506478 | orchestrator | ok: [testbed-node-2]
2025-04-13 01:10:46.506491 | orchestrator |
2025-04-13 01:10:46.506503 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-13 01:10:46.506515 | orchestrator | Sunday 13 April 2025 01:08:51 +0000 (0:00:00.433) 0:00:00.764 **********
2025-04-13 01:10:46.506528 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-04-13 01:10:46.506540 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-04-13 01:10:46.506552 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-04-13 01:10:46.506565 | orchestrator |
2025-04-13 01:10:46.506577 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-04-13 01:10:46.506589 | orchestrator |
2025-04-13 01:10:46.506602 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-04-13 01:10:46.506614 | orchestrator | Sunday 13 April 2025 01:08:51 +0000 (0:00:00.336) 0:00:01.101 **********
2025-04-13 01:10:46.506627 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 01:10:46.506639 | orchestrator |
2025-04-13 01:10:46.506652 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-04-13 01:10:46.506664 | orchestrator | Sunday 13 April 2025 01:08:52 +0000 (0:00:00.770) 0:00:01.872 **********
2025-04-13 01:10:46.506678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.506695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.506709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.506729 | orchestrator |
2025-04-13 01:10:46.506741 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-04-13 01:10:46.506753 | orchestrator | Sunday 13 April 2025 01:08:53 +0000 (0:00:01.150) 0:00:03.022 **********
2025-04-13 01:10:46.506766 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-04-13 01:10:46.506785 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-04-13 01:10:46.506797 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-13 01:10:46.506810 | orchestrator |
2025-04-13 01:10:46.506822 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-04-13 01:10:46.506834 | orchestrator | Sunday 13 April 2025 01:08:54 +0000 (0:00:00.521) 0:00:03.543 **********
2025-04-13 01:10:46.506846 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 01:10:46.506859 | orchestrator |
2025-04-13 01:10:46.506871 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-04-13 01:10:46.506884 | orchestrator | Sunday 13 April 2025 01:08:54 +0000 (0:00:00.597) 0:00:04.141 **********
2025-04-13 01:10:46.506932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.506949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.506962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.506975 | orchestrator |
2025-04-13 01:10:46.506987 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-04-13 01:10:46.507005 | orchestrator | Sunday 13 April 2025 01:08:56 +0000 (0:00:01.401) 0:00:05.543 **********
2025-04-13 01:10:46.507017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.507043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.507056 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:10:46.507069 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:10:46.507126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.507143 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:10:46.507155 | orchestrator |
2025-04-13 01:10:46.507168 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-04-13 01:10:46.507180 | orchestrator | Sunday 13 April 2025 01:08:56 +0000 (0:00:00.530) 0:00:06.073 **********
2025-04-13 01:10:46.507193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.507206 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:10:46.507218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.507231 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:10:46.507244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.507264 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:10:46.507276 | orchestrator |
2025-04-13 01:10:46.507289 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-04-13 01:10:46.507301 | orchestrator | Sunday 13 April 2025 01:08:57 +0000 (0:00:00.809) 0:00:06.883 **********
2025-04-13 01:10:46.507314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.507326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.507365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.507379 | orchestrator |
2025-04-13 01:10:46.507392 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-04-13 01:10:46.507404 | orchestrator | Sunday 13 April 2025 01:08:58 +0000 (0:00:01.421) 0:00:08.305 **********
2025-04-13 01:10:46.507417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.507430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.507565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-04-13 01:10:46.507693 | orchestrator |
2025-04-13 01:10:46.507718 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-04-13 01:10:46.507734 | orchestrator | Sunday 13 April 2025 01:09:00 +0000 (0:00:01.728) 0:00:10.033 **********
2025-04-13 01:10:46.507749 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:10:46.507764 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:10:46.507778 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:10:46.507792 | orchestrator |
2025-04-13 01:10:46.507806 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-04-13 01:10:46.507820 | orchestrator | Sunday 13 April 2025 01:09:00 +0000 (0:00:00.276) 0:00:10.310 **********
2025-04-13 01:10:46.507834 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-04-13 01:10:46.507848 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-04-13 01:10:46.507862 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-04-13 01:10:46.507875 | orchestrator |
2025-04-13 01:10:46.507889 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-04-13 01:10:46.507902 | orchestrator | Sunday 13 April 2025 01:09:02 +0000 (0:00:01.419) 0:00:11.729 **********
2025-04-13 01:10:46.507917 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-04-13 01:10:46.507931 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-04-13 01:10:46.507944 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-04-13 01:10:46.507958 | orchestrator |
2025-04-13 01:10:46.508073 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-04-13 01:10:46.508138 | orchestrator | Sunday 13 April 2025 01:09:03 +0000 (0:00:01.321) 0:00:13.050 **********
2025-04-13 01:10:46.508154 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-13 01:10:46.508168 | orchestrator |
2025-04-13 01:10:46.508182 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-04-13 01:10:46.508196 | orchestrator | Sunday 13 April 2025 01:09:04 +0000 (0:00:00.561) 0:00:13.612 **********
2025-04-13 01:10:46.508210 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-04-13 01:10:46.508224 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-04-13 01:10:46.508237 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:10:46.508252 | orchestrator | ok: [testbed-node-1]
2025-04-13 01:10:46.508266 | orchestrator | ok: [testbed-node-2]
2025-04-13 01:10:46.508279 | orchestrator |
2025-04-13 01:10:46.508322 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-04-13 01:10:46.508337 | orchestrator | Sunday 13 April 2025 01:09:05 +0000 (0:00:00.922) 0:00:14.535 **********
2025-04-13 01:10:46.508351 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:10:46.508365 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:10:46.508378 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:10:46.508392 | orchestrator |
2025-04-13 01:10:46.508406 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-04-13 01:10:46.508420 | orchestrator | Sunday 13 April 2025 01:09:05 +0000 (0:00:00.465) 0:00:15.001 **********
2025-04-13 01:10:46.508437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1071782, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2128925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1071782, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2128925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1071782, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2128925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1071726, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2008922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1071726, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2008922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1071726, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2008922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1071720, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1888921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1071720, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1888921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1071720, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1888921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1071775, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2098925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1071775, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2098925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1071775, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2098925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1071701, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.179892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1071701, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.179892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1071701, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.179892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1071721, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1908922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1071721, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1908922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1071721, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1908922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1071773, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2088923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1071773, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2088923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1071773, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2088923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1071697, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.178892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.508979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1071697, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.178892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.509004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1071697, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.178892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.509026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1071665, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1698918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.509041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1071665, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1698918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.509055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1071665, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1698918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.509070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1071706, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.181892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.509111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1071706, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.181892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-13 01:10:46.509131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1071706, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime':
1744503272.181892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1071676, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1738918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1071676, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1738918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1071676, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1744503272.1738918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1071770, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.2088923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1071770, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.2088923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1071770, 'dev': 169, 
'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.2088923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1071715, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.185892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1071715, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.185892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1071715, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.185892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1071777, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2108924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1071777, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2108924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1071777, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2108924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1071695, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.178892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1071695, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.178892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1071695, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.178892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1071725, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1918921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1071725, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1918921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1071725, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1918921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1071667, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1728919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1071667, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1728919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1071667, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1728919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1071679, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.174892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1071679, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.174892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509611 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1071679, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.174892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1071717, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1878922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1071717, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1878922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509672 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1071717, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.1878922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1071812, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.233893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1071812, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.233893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1071812, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.233893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1071805, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2248926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1071805, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2248926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1071805, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2248926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1071835, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.3998957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1071835, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1744503272.3998957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1071835, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.3998957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1071784, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2138925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 31128, 'inode': 1071784, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2138925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1071784, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2138925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1072077, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4038956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.509982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1072077, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4038956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1072077, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4038956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1071822, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2348928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1071822, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2348928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1071822, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2348928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1071824, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2358928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510230 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1071824, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2358928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1071824, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2358928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1071785, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2148926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1071785, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2148926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1071785, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2148926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1071809, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2258928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1071809, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2258928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1071809, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2258928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1072083, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4058957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1072083, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4058957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1072083, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4058957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1071830, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 
1744503272.2378929, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1071830, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.2378929, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1071830, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744503272.2378929, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1071787, 
'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2178926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1071787, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2178926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1071787, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2178926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1071786, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2158926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1071786, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2158926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1071786, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2158926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1071792, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2188926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1071792, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2188926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1071792, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2188926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1071795, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2248926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1071795, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2248926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1071795, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.2248926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': 
'/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1072088, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4068956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1072088, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4068956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1072088, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744503272.4068956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-13 01:10:46.510767 | orchestrator | 2025-04-13 01:10:46.510782 | orchestrator | TASK [grafana : 
Check grafana containers] ************************************** 2025-04-13 01:10:46.510797 | orchestrator | Sunday 13 April 2025 01:09:38 +0000 (0:00:33.095) 0:00:48.096 ********** 2025-04-13 01:10:46.510817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-13 01:10:46.510833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-13 01:10:46.510848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-13 01:10:46.510870 | orchestrator | 2025-04-13 01:10:46.510884 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-04-13 01:10:46.510898 | orchestrator | Sunday 13 April 2025 01:09:39 +0000 (0:00:01.070) 0:00:49.167 ********** 2025-04-13 01:10:46.510912 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:10:46.510926 | orchestrator | 2025-04-13 01:10:46.510940 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-04-13 01:10:46.510953 | orchestrator | Sunday 13 April 2025 01:09:42 +0000 (0:00:02.564) 0:00:51.732 ********** 2025-04-13 01:10:46.510967 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:10:46.510980 | orchestrator | 2025-04-13 01:10:46.510994 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-04-13 01:10:46.511008 | orchestrator | Sunday 13 April 2025 01:09:44 +0000 (0:00:02.304) 0:00:54.036 ********** 2025-04-13 01:10:46.511021 | orchestrator | 2025-04-13 01:10:46.511035 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-04-13 01:10:46.511063 | orchestrator | Sunday 13 April 2025 01:09:44 +0000 (0:00:00.058) 0:00:54.095 ********** 2025-04-13 01:10:46.511077 | orchestrator | 2025-04-13 01:10:46.511150 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-04-13 01:10:46.511165 | orchestrator | Sunday 13 April 2025 01:09:44 +0000 (0:00:00.057) 0:00:54.152 ********** 2025-04-13 01:10:46.511179 | orchestrator | 2025-04-13 01:10:46.511193 | orchestrator | 
RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-04-13 01:10:46.511206 | orchestrator | Sunday 13 April 2025 01:09:44 +0000 (0:00:00.194) 0:00:54.346 ********** 2025-04-13 01:10:46.511218 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:10:46.511230 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:10:46.511242 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:10:46.511254 | orchestrator | 2025-04-13 01:10:46.511267 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-04-13 01:10:46.511279 | orchestrator | Sunday 13 April 2025 01:09:51 +0000 (0:00:06.990) 0:01:01.337 ********** 2025-04-13 01:10:46.511291 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:10:46.511303 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:10:46.511315 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-04-13 01:10:46.511328 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2025-04-13 01:10:46.511340 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:10:46.511353 | orchestrator | 2025-04-13 01:10:46.511365 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-04-13 01:10:46.511377 | orchestrator | Sunday 13 April 2025 01:10:18 +0000 (0:00:26.731) 0:01:28.068 ********** 2025-04-13 01:10:46.511389 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:10:46.511401 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:10:46.511414 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:10:46.511426 | orchestrator | 2025-04-13 01:10:46.511438 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-04-13 01:10:46.511450 | orchestrator | Sunday 13 April 2025 01:10:37 +0000 (0:00:18.787) 0:01:46.856 ********** 2025-04-13 01:10:46.511462 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:10:46.511474 | orchestrator | 2025-04-13 01:10:46.511486 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-04-13 01:10:46.511499 | orchestrator | Sunday 13 April 2025 01:10:39 +0000 (0:00:02.314) 0:01:49.171 ********** 2025-04-13 01:10:46.511518 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:10:46.511537 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:10:49.550409 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:10:49.550530 | orchestrator | 2025-04-13 01:10:49.550549 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-04-13 01:10:49.550565 | orchestrator | Sunday 13 April 2025 01:10:40 +0000 (0:00:00.649) 0:01:49.820 ********** 2025-04-13 01:10:49.550581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-04-13 01:10:49.550599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-04-13 01:10:49.550614 | orchestrator | 2025-04-13 01:10:49.550629 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-04-13 01:10:49.550642 | orchestrator | Sunday 13 April 2025 01:10:42 +0000 (0:00:02.552) 0:01:52.373 ********** 2025-04-13 01:10:49.550656 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:10:49.550670 | orchestrator | 2025-04-13 01:10:49.550684 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-13 01:10:49.550698 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-13 01:10:49.550714 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-13 01:10:49.550727 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-13 01:10:49.550741 | orchestrator | 2025-04-13 01:10:49.550755 | orchestrator | 2025-04-13 01:10:49.550769 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-13 01:10:49.550782 | orchestrator | Sunday 13 April 2025 01:10:43 +0000 (0:00:00.628) 0:01:53.001 ********** 2025-04-13 01:10:49.550796 | orchestrator | =============================================================================== 2025-04-13 01:10:49.550810 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 33.10s 2025-04-13 01:10:49.550824 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 26.73s 2025-04-13 01:10:49.550837 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 18.79s 2025-04-13 01:10:49.550851 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.99s 2025-04-13 01:10:49.550864 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.56s 2025-04-13 01:10:49.550878 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.55s 2025-04-13 01:10:49.550891 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.31s 2025-04-13 01:10:49.550905 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.30s 2025-04-13 01:10:49.550919 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.73s 2025-04-13 01:10:49.550935 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.42s 2025-04-13 01:10:49.550975 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.42s 2025-04-13 01:10:49.550991 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.40s 2025-04-13 01:10:49.551006 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.32s 2025-04-13 01:10:49.551022 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.15s 2025-04-13 01:10:49.551068 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.07s 2025-04-13 01:10:49.551119 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.92s 2025-04-13 01:10:49.551137 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.81s 2025-04-13 01:10:49.551152 | orchestrator | grafana : include_tasks 
------------------------------------------------- 0.77s 2025-04-13 01:10:49.551167 | orchestrator | grafana : Remove old grafana docker volume ------------------------------ 0.65s 2025-04-13 01:10:49.551182 | orchestrator | grafana : Disable Getting Started panel --------------------------------- 0.63s 2025-04-13 01:10:49.551198 | orchestrator | 2025-04-13 01:10:46 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:49.551233 | orchestrator | 2025-04-13 01:10:49 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:49.552771 | orchestrator | 2025-04-13 01:10:49 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:52.609506 | orchestrator | 2025-04-13 01:10:49 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:52.609649 | orchestrator | 2025-04-13 01:10:52 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:52.611205 | orchestrator | 2025-04-13 01:10:52 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:52.611535 | orchestrator | 2025-04-13 01:10:52 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:55.664984 | orchestrator | 2025-04-13 01:10:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:55.665613 | orchestrator | 2025-04-13 01:10:55 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:10:58.724580 | orchestrator | 2025-04-13 01:10:55 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:10:58.724715 | orchestrator | 2025-04-13 01:10:58 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:10:58.726484 | orchestrator | 2025-04-13 01:10:58 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:11:01.781027 | orchestrator | 2025-04-13 01:10:58 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:11:01.781206 | orchestrator | 
2025-04-13 01:11:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:11:01.782526 | orchestrator | 2025-04-13 01:11:01 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:13:25.225909 | orchestrator | 2025-04-13 01:13:22 | INFO  | Wait
1 second(s) until the next check 2025-04-13 01:13:25.226136 | orchestrator | 2025-04-13 01:13:25 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:13:25.230441 | orchestrator | 2025-04-13 01:13:25 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:13:25.231955 | orchestrator | 2025-04-13 01:13:25 | INFO  | Task 482798f6-bbbf-4de3-9a3a-0df683f19f9a is in state STARTED 2025-04-13 01:13:25.232419 | orchestrator | 2025-04-13 01:13:25 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:13:28.295117 | orchestrator | 2025-04-13 01:13:28 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:13:28.296413 | orchestrator | 2025-04-13 01:13:28 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:13:28.298328 | orchestrator | 2025-04-13 01:13:28 | INFO  | Task 482798f6-bbbf-4de3-9a3a-0df683f19f9a is in state STARTED 2025-04-13 01:13:28.298807 | orchestrator | 2025-04-13 01:13:28 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:13:31.358594 | orchestrator | 2025-04-13 01:13:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:13:31.362190 | orchestrator | 2025-04-13 01:13:31 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:13:31.364138 | orchestrator | 2025-04-13 01:13:31 | INFO  | Task 482798f6-bbbf-4de3-9a3a-0df683f19f9a is in state STARTED 2025-04-13 01:13:31.364689 | orchestrator | 2025-04-13 01:13:31 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:13:34.429754 | orchestrator | 2025-04-13 01:13:34 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:13:34.431233 | orchestrator | 2025-04-13 01:13:34 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:13:34.432026 | orchestrator | 2025-04-13 01:13:34 | INFO  | Task 482798f6-bbbf-4de3-9a3a-0df683f19f9a is in state 
SUCCESS 2025-04-13 01:13:37.480826 | orchestrator | 2025-04-13 01:13:34 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:13:37.481010 | orchestrator | 2025-04-13 01:13:37 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:14:38.433024 | orchestrator | 2025-04-13
01:14:38.434462 | orchestrator | 2025-04-13 01:14:38 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:14:41.488605 | orchestrator | 2025-04-13 01:14:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:14:41.488716 | orchestrator | 2025-04-13 01:14:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:14:41.490600 | orchestrator | 2025-04-13 01:14:41 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state STARTED 2025-04-13 01:14:44.531734 | orchestrator | 2025-04-13 01:14:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:14:44.531897 | orchestrator | 2025-04-13 01:14:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:14:44.534436 | orchestrator | 2025-04-13 01:14:44 | INFO  | Task 5da1134c-9b07-437b-9261-48bfe7fe7516 is in state SUCCESS 2025-04-13 01:14:44.535591 | orchestrator | 2025-04-13 01:14:44.535641 | orchestrator | None 2025-04-13 01:14:44.535661 | orchestrator | 2025-04-13 01:14:44.535680 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-13 01:14:44.535718 | orchestrator | 2025-04-13 01:14:44.535737 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-04-13 01:14:44.535756 | orchestrator | Sunday 13 April 2025 01:06:27 +0000 (0:00:00.326) 0:00:00.326 ********** 2025-04-13 01:14:44.535774 | orchestrator | changed: [testbed-manager] 2025-04-13 01:14:44.535793 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.535812 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:14:44.535830 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:14:44.535849 | orchestrator | changed: [testbed-node-3] 2025-04-13 01:14:44.535867 | orchestrator | changed: [testbed-node-4] 2025-04-13 01:14:44.535885 | orchestrator | changed: [testbed-node-5] 2025-04-13 01:14:44.535903 | orchestrator | 2025-04-13 
01:14:44.535921 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-13 01:14:44.535939 | orchestrator | Sunday 13 April 2025 01:06:28 +0000 (0:00:00.862) 0:00:01.188 ********** 2025-04-13 01:14:44.535958 | orchestrator | changed: [testbed-manager] 2025-04-13 01:14:44.535976 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.535993 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:14:44.536011 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:14:44.536029 | orchestrator | changed: [testbed-node-3] 2025-04-13 01:14:44.536046 | orchestrator | changed: [testbed-node-4] 2025-04-13 01:14:44.536069 | orchestrator | changed: [testbed-node-5] 2025-04-13 01:14:44.536111 | orchestrator | 2025-04-13 01:14:44.536131 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-13 01:14:44.536150 | orchestrator | Sunday 13 April 2025 01:06:29 +0000 (0:00:01.397) 0:00:02.586 ********** 2025-04-13 01:14:44.536169 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-04-13 01:14:44.536189 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-04-13 01:14:44.536208 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-04-13 01:14:44.536226 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-04-13 01:14:44.536335 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-04-13 01:14:44.536356 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-04-13 01:14:44.536375 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-04-13 01:14:44.536395 | orchestrator | 2025-04-13 01:14:44.536443 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-04-13 01:14:44.536464 | orchestrator | 2025-04-13 01:14:44.536483 | orchestrator | TASK [Bootstrap deploy] 
******************************************************** 2025-04-13 01:14:44.536501 | orchestrator | Sunday 13 April 2025 01:06:30 +0000 (0:00:01.258) 0:00:03.845 ********** 2025-04-13 01:14:44.536520 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:14:44.536538 | orchestrator | 2025-04-13 01:14:44.536557 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-04-13 01:14:44.536576 | orchestrator | Sunday 13 April 2025 01:06:31 +0000 (0:00:00.734) 0:00:04.580 ********** 2025-04-13 01:14:44.536596 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-04-13 01:14:44.536614 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-04-13 01:14:44.536633 | orchestrator | 2025-04-13 01:14:44.536651 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-04-13 01:14:44.536669 | orchestrator | Sunday 13 April 2025 01:06:35 +0000 (0:00:04.423) 0:00:09.003 ********** 2025-04-13 01:14:44.536688 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-13 01:14:44.536706 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-13 01:14:44.536725 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.536743 | orchestrator | 2025-04-13 01:14:44.536761 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-04-13 01:14:44.536780 | orchestrator | Sunday 13 April 2025 01:06:40 +0000 (0:00:04.728) 0:00:13.732 ********** 2025-04-13 01:14:44.536797 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.536816 | orchestrator | 2025-04-13 01:14:44.536834 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-04-13 01:14:44.536852 | orchestrator | Sunday 13 April 2025 01:06:41 +0000 (0:00:00.912) 0:00:14.644 ********** 2025-04-13 01:14:44.536871 | orchestrator | changed: [testbed-node-0] 2025-04-13 
01:14:44.536889 | orchestrator | 2025-04-13 01:14:44.536907 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-04-13 01:14:44.536925 | orchestrator | Sunday 13 April 2025 01:06:43 +0000 (0:00:01.589) 0:00:16.234 ********** 2025-04-13 01:14:44.536944 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.536962 | orchestrator | 2025-04-13 01:14:44.536980 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-13 01:14:44.536998 | orchestrator | Sunday 13 April 2025 01:06:47 +0000 (0:00:04.401) 0:00:20.636 ********** 2025-04-13 01:14:44.537016 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.537034 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.537052 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.537070 | orchestrator | 2025-04-13 01:14:44.537114 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-04-13 01:14:44.537135 | orchestrator | Sunday 13 April 2025 01:06:48 +0000 (0:00:00.976) 0:00:21.612 ********** 2025-04-13 01:14:44.537153 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:14:44.537172 | orchestrator | 2025-04-13 01:14:44.537190 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-04-13 01:14:44.537207 | orchestrator | Sunday 13 April 2025 01:07:16 +0000 (0:00:27.734) 0:00:49.346 ********** 2025-04-13 01:14:44.537226 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.537243 | orchestrator | 2025-04-13 01:14:44.537261 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-04-13 01:14:44.537279 | orchestrator | Sunday 13 April 2025 01:07:29 +0000 (0:00:13.554) 0:01:02.901 ********** 2025-04-13 01:14:44.537297 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:14:44.537314 | orchestrator | 2025-04-13 01:14:44.537333 | orchestrator | TASK 
[nova-cell : Extract current cell settings from list] ********************* 2025-04-13 01:14:44.537350 | orchestrator | Sunday 13 April 2025 01:07:41 +0000 (0:00:11.554) 0:01:14.455 ********** 2025-04-13 01:14:44.537382 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:14:44.537400 | orchestrator | 2025-04-13 01:14:44.537416 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-04-13 01:14:44.537445 | orchestrator | Sunday 13 April 2025 01:07:42 +0000 (0:00:01.432) 0:01:15.889 ********** 2025-04-13 01:14:44.537462 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.537479 | orchestrator | 2025-04-13 01:14:44.537496 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-13 01:14:44.537512 | orchestrator | Sunday 13 April 2025 01:07:43 +0000 (0:00:00.821) 0:01:16.711 ********** 2025-04-13 01:14:44.537531 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:14:44.537550 | orchestrator | 2025-04-13 01:14:44.537569 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-04-13 01:14:44.537587 | orchestrator | Sunday 13 April 2025 01:07:44 +0000 (0:00:00.914) 0:01:17.626 ********** 2025-04-13 01:14:44.537605 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:14:44.537623 | orchestrator | 2025-04-13 01:14:44.537642 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-04-13 01:14:44.537659 | orchestrator | Sunday 13 April 2025 01:08:00 +0000 (0:00:15.794) 0:01:33.421 ********** 2025-04-13 01:14:44.537678 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.537696 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.537714 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.537732 | orchestrator | 2025-04-13 01:14:44.537749 | orchestrator | PLAY 
[Bootstrap nova cell databases] ******************************************* 2025-04-13 01:14:44.537768 | orchestrator | 2025-04-13 01:14:44.537786 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-04-13 01:14:44.537804 | orchestrator | Sunday 13 April 2025 01:08:00 +0000 (0:00:00.299) 0:01:33.720 ********** 2025-04-13 01:14:44.537822 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:14:44.537840 | orchestrator | 2025-04-13 01:14:44.537858 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-04-13 01:14:44.537877 | orchestrator | Sunday 13 April 2025 01:08:01 +0000 (0:00:00.809) 0:01:34.529 ********** 2025-04-13 01:14:44.537894 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.537912 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.537930 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.537948 | orchestrator | 2025-04-13 01:14:44.537967 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-04-13 01:14:44.537984 | orchestrator | Sunday 13 April 2025 01:08:04 +0000 (0:00:02.521) 0:01:37.051 ********** 2025-04-13 01:14:44.538002 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.538074 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.538161 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.538181 | orchestrator | 2025-04-13 01:14:44.538200 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-04-13 01:14:44.538218 | orchestrator | Sunday 13 April 2025 01:08:06 +0000 (0:00:02.348) 0:01:39.399 ********** 2025-04-13 01:14:44.538236 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.538254 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.538272 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.538291 
| orchestrator | 2025-04-13 01:14:44.538309 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-04-13 01:14:44.538326 | orchestrator | Sunday 13 April 2025 01:08:06 +0000 (0:00:00.502) 0:01:39.901 ********** 2025-04-13 01:14:44.538344 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-13 01:14:44.538361 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.538378 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-13 01:14:44.538395 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.538412 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-04-13 01:14:44.538428 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-04-13 01:14:44.538445 | orchestrator | 2025-04-13 01:14:44.538463 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-04-13 01:14:44.538493 | orchestrator | Sunday 13 April 2025 01:08:16 +0000 (0:00:09.304) 0:01:49.205 ********** 2025-04-13 01:14:44.538512 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.538530 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.538548 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.538566 | orchestrator | 2025-04-13 01:14:44.538584 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-04-13 01:14:44.538602 | orchestrator | Sunday 13 April 2025 01:08:16 +0000 (0:00:00.339) 0:01:49.545 ********** 2025-04-13 01:14:44.538620 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-13 01:14:44.538646 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.538666 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-13 01:14:44.538684 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.538702 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-13 01:14:44.538720 | orchestrator | 
skipping: [testbed-node-2] 2025-04-13 01:14:44.538738 | orchestrator | 2025-04-13 01:14:44.538756 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-04-13 01:14:44.538774 | orchestrator | Sunday 13 April 2025 01:08:17 +0000 (0:00:00.914) 0:01:50.459 ********** 2025-04-13 01:14:44.538793 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.538811 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.538829 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.538847 | orchestrator | 2025-04-13 01:14:44.538865 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-04-13 01:14:44.538883 | orchestrator | Sunday 13 April 2025 01:08:17 +0000 (0:00:00.447) 0:01:50.906 ********** 2025-04-13 01:14:44.538901 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.538919 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.538937 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.538954 | orchestrator | 2025-04-13 01:14:44.538973 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-04-13 01:14:44.538991 | orchestrator | Sunday 13 April 2025 01:08:18 +0000 (0:00:00.961) 0:01:51.868 ********** 2025-04-13 01:14:44.539009 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.539040 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.539058 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.539075 | orchestrator | 2025-04-13 01:14:44.539110 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-04-13 01:14:44.539127 | orchestrator | Sunday 13 April 2025 01:08:21 +0000 (0:00:02.377) 0:01:54.246 ********** 2025-04-13 01:14:44.539143 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.539160 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.539176 | orchestrator | ok: 
[testbed-node-0] 2025-04-13 01:14:44.539193 | orchestrator | 2025-04-13 01:14:44.539209 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-04-13 01:14:44.539225 | orchestrator | Sunday 13 April 2025 01:08:39 +0000 (0:00:18.608) 0:02:12.854 ********** 2025-04-13 01:14:44.539242 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.539258 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.539277 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:14:44.539293 | orchestrator | 2025-04-13 01:14:44.539310 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-04-13 01:14:44.539327 | orchestrator | Sunday 13 April 2025 01:08:50 +0000 (0:00:11.058) 0:02:23.912 ********** 2025-04-13 01:14:44.539345 | orchestrator | ok: [testbed-node-0] 2025-04-13 01:14:44.539371 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.539389 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.539408 | orchestrator | 2025-04-13 01:14:44.539426 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-04-13 01:14:44.539444 | orchestrator | Sunday 13 April 2025 01:08:52 +0000 (0:00:01.442) 0:02:25.354 ********** 2025-04-13 01:14:44.539462 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.539480 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.539509 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.539526 | orchestrator | 2025-04-13 01:14:44.539541 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-04-13 01:14:44.539557 | orchestrator | Sunday 13 April 2025 01:09:03 +0000 (0:00:11.058) 0:02:36.413 ********** 2025-04-13 01:14:44.539572 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.539588 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.539603 | orchestrator | skipping: [testbed-node-2] 
2025-04-13 01:14:44.539618 | orchestrator | 2025-04-13 01:14:44.539633 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-04-13 01:14:44.539648 | orchestrator | Sunday 13 April 2025 01:09:04 +0000 (0:00:01.462) 0:02:37.875 ********** 2025-04-13 01:14:44.539664 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.539679 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.539694 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.539710 | orchestrator | 2025-04-13 01:14:44.539725 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-04-13 01:14:44.539740 | orchestrator | 2025-04-13 01:14:44.539756 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-13 01:14:44.539771 | orchestrator | Sunday 13 April 2025 01:09:05 +0000 (0:00:00.479) 0:02:38.355 ********** 2025-04-13 01:14:44.539786 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:14:44.539803 | orchestrator | 2025-04-13 01:14:44.539819 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-04-13 01:14:44.539834 | orchestrator | Sunday 13 April 2025 01:09:06 +0000 (0:00:00.927) 0:02:39.282 ********** 2025-04-13 01:14:44.539849 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-04-13 01:14:44.539864 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-04-13 01:14:44.539880 | orchestrator | 2025-04-13 01:14:44.539895 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-04-13 01:14:44.539911 | orchestrator | Sunday 13 April 2025 01:09:09 +0000 (0:00:03.171) 0:02:42.454 ********** 2025-04-13 01:14:44.539926 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> 
https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-04-13 01:14:44.539942 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-04-13 01:14:44.539957 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-04-13 01:14:44.539973 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-04-13 01:14:44.539989 | orchestrator | 2025-04-13 01:14:44.540004 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-04-13 01:14:44.540019 | orchestrator | Sunday 13 April 2025 01:09:15 +0000 (0:00:06.454) 0:02:48.908 ********** 2025-04-13 01:14:44.540034 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-13 01:14:44.540049 | orchestrator | 2025-04-13 01:14:44.540064 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-04-13 01:14:44.540079 | orchestrator | Sunday 13 April 2025 01:09:19 +0000 (0:00:03.230) 0:02:52.138 ********** 2025-04-13 01:14:44.540107 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-13 01:14:44.540121 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-04-13 01:14:44.540135 | orchestrator | 2025-04-13 01:14:44.540149 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-04-13 01:14:44.540163 | orchestrator | Sunday 13 April 2025 01:09:23 +0000 (0:00:04.153) 0:02:56.292 ********** 2025-04-13 01:14:44.540177 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-13 01:14:44.540191 | orchestrator | 2025-04-13 01:14:44.540206 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-04-13 01:14:44.540227 | orchestrator | Sunday 13 April 2025 01:09:26 +0000 (0:00:03.262) 
0:02:59.554 ********** 2025-04-13 01:14:44.540251 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-04-13 01:14:44.540267 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-04-13 01:14:44.540281 | orchestrator | 2025-04-13 01:14:44.540297 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-04-13 01:14:44.540322 | orchestrator | Sunday 13 April 2025 01:09:34 +0000 (0:00:07.947) 0:03:07.502 ********** 2025-04-13 01:14:44.540342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.540362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.540380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.540397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.540431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.540449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.540465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.540481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.540496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.540512 | orchestrator | 2025-04-13 01:14:44.540528 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-04-13 01:14:44.540551 | orchestrator | Sunday 13 April 2025 01:09:36 +0000 (0:00:01.616) 0:03:09.118 ********** 2025-04-13 01:14:44.540567 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.540582 | orchestrator | 2025-04-13 01:14:44.540597 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-04-13 01:14:44.540613 | orchestrator | Sunday 13 April 2025 01:09:36 +0000 (0:00:00.119) 0:03:09.238 ********** 2025-04-13 01:14:44.540628 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.540644 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.540659 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.540675 | orchestrator | 2025-04-13 01:14:44.540690 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-04-13 01:14:44.540706 | orchestrator | Sunday 13 April 2025 01:09:36 +0000 (0:00:00.449) 0:03:09.687 ********** 2025-04-13 01:14:44.540721 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-13 01:14:44.540736 | orchestrator | 2025-04-13 01:14:44.540757 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-04-13 01:14:44.540772 | orchestrator | Sunday 13 April 2025 01:09:37 +0000 (0:00:00.415) 0:03:10.102 ********** 2025-04-13 01:14:44.540788 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.540803 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.540818 | orchestrator | skipping: [testbed-node-2] 
2025-04-13 01:14:44.540833 | orchestrator | 2025-04-13 01:14:44.540848 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-13 01:14:44.540864 | orchestrator | Sunday 13 April 2025 01:09:37 +0000 (0:00:00.286) 0:03:10.388 ********** 2025-04-13 01:14:44.540879 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:14:44.540894 | orchestrator | 2025-04-13 01:14:44.540908 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-04-13 01:14:44.540922 | orchestrator | Sunday 13 April 2025 01:09:38 +0000 (0:00:00.866) 0:03:11.255 ********** 2025-04-13 01:14:44.540938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2025-04-13 01:14:44.540953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.540983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.540999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.541014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.541030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.541046 | orchestrator | 2025-04-13 01:14:44.541062 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-04-13 01:14:44.541077 | orchestrator | Sunday 13 April 2025 01:09:40 +0000 (0:00:02.536) 0:03:13.791 ********** 2025-04-13 01:14:44.541121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}})  2025-04-13 01:14:44.541154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.541176 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.541656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-13 
01:14:44.541686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.541701 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.541716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-13 01:14:44.541743 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.541757 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.541771 | orchestrator | 2025-04-13 01:14:44.541785 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-04-13 01:14:44.541799 | orchestrator | Sunday 13 April 2025 01:09:41 +0000 (0:00:00.807) 0:03:14.599 ********** 2025-04-13 01:14:44.541823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-13 01:14:44.541841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.541857 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.541873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-13 01:14:44.541898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.541914 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.541939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-13 01:14:44.541956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.541972 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.541988 | orchestrator | 2025-04-13 01:14:44.542004 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-04-13 01:14:44.542051 | orchestrator | Sunday 13 April 2025 01:09:42 +0000 (0:00:01.179) 0:03:15.778 ********** 2025-04-13 01:14:44.542069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.542145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.542174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.542192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.542216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.542233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.542249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.542273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.542290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.542306 | orchestrator | 2025-04-13 01:14:44.542322 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-04-13 01:14:44.542339 | orchestrator | Sunday 13 April 2025 01:09:45 +0000 (0:00:02.679) 0:03:18.458 ********** 2025-04-13 01:14:44.542394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.542421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.542446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.542476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.542493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.542517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.542533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.542550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.542572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.542589 | orchestrator | 2025-04-13 01:14:44.542606 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-04-13 01:14:44.542623 | orchestrator | Sunday 13 April 2025 01:09:51 +0000 (0:00:06.067) 0:03:24.525 ********** 2025-04-13 01:14:44.542651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-13 01:14:44.542681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.542700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.542716 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.542731 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-13 01:14:44.542763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.542779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.542802 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.542818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-13 01:14:44.542834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.542850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.542866 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.542882 | orchestrator | 2025-04-13 01:14:44.542897 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-04-13 01:14:44.542911 | orchestrator | Sunday 13 April 2025 01:09:52 +0000 (0:00:00.810) 0:03:25.336 ********** 2025-04-13 01:14:44.543384 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.543406 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:14:44.543421 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:14:44.543437 | orchestrator | 2025-04-13 01:14:44.543454 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-04-13 01:14:44.543469 | orchestrator | Sunday 13 April 2025 01:09:54 +0000 (0:00:01.707) 0:03:27.043 ********** 2025-04-13 
01:14:44.543493 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.543508 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.543523 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.543539 | orchestrator | 2025-04-13 01:14:44.543554 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-04-13 01:14:44.543569 | orchestrator | Sunday 13 April 2025 01:09:54 +0000 (0:00:00.470) 0:03:27.514 ********** 2025-04-13 01:14:44.543618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.543636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.543653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-13 01:14:44.543697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.543722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.543738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.543754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.543770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.543786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.543802 | orchestrator | 2025-04-13 01:14:44.543817 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-04-13 01:14:44.543832 | orchestrator | Sunday 13 April 2025 01:09:56 +0000 (0:00:02.132) 0:03:29.646 ********** 2025-04-13 01:14:44.543847 | orchestrator | 2025-04-13 01:14:44.543863 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-04-13 01:14:44.543878 | orchestrator | Sunday 13 April 2025 01:09:56 +0000 (0:00:00.280) 0:03:29.927 ********** 2025-04-13 01:14:44.543891 | orchestrator | 2025-04-13 01:14:44.543914 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-04-13 01:14:44.543930 | orchestrator | Sunday 13 April 2025 01:09:56 +0000 (0:00:00.106) 0:03:30.033 ********** 2025-04-13 01:14:44.543947 | orchestrator | 2025-04-13 01:14:44.543968 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-04-13 01:14:44.543985 | orchestrator | Sunday 13 April 2025 01:09:57 +0000 (0:00:00.259) 0:03:30.292 ********** 2025-04-13 01:14:44.544002 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.544018 | orchestrator | changed: [testbed-node-1] 2025-04-13 01:14:44.544034 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:14:44.544050 | orchestrator | 2025-04-13 01:14:44.544065 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-04-13 01:14:44.544627 | orchestrator | Sunday 13 April 2025 01:10:13 +0000 (0:00:16.359) 0:03:46.652 ********** 2025-04-13 01:14:44.544644 | orchestrator | changed: [testbed-node-2] 2025-04-13 01:14:44.544659 | orchestrator | changed: [testbed-node-0] 2025-04-13 01:14:44.544673 | orchestrator 
| changed: [testbed-node-1] 2025-04-13 01:14:44.544688 | orchestrator | 2025-04-13 01:14:44.544702 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-04-13 01:14:44.544716 | orchestrator | 2025-04-13 01:14:44.544739 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-13 01:14:44.544753 | orchestrator | Sunday 13 April 2025 01:10:24 +0000 (0:00:10.648) 0:03:57.300 ********** 2025-04-13 01:14:44.544768 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-13 01:14:44.544784 | orchestrator | 2025-04-13 01:14:44.544798 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-13 01:14:44.544812 | orchestrator | Sunday 13 April 2025 01:10:25 +0000 (0:00:01.377) 0:03:58.677 ********** 2025-04-13 01:14:44.544826 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:14:44.544841 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:14:44.544855 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:14:44.544868 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.544883 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.544897 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.544910 | orchestrator | 2025-04-13 01:14:44.544925 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-04-13 01:14:44.544939 | orchestrator | Sunday 13 April 2025 01:10:26 +0000 (0:00:00.721) 0:03:59.399 ********** 2025-04-13 01:14:44.544952 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.544966 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.544980 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.544993 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-04-13 01:14:44.545008 | orchestrator | 2025-04-13 01:14:44.545022 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-13 01:14:44.545036 | orchestrator | Sunday 13 April 2025 01:10:27 +0000 (0:00:01.211) 0:04:00.610 ********** 2025-04-13 01:14:44.545051 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-04-13 01:14:44.545065 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-04-13 01:14:44.545080 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-04-13 01:14:44.545155 | orchestrator | 2025-04-13 01:14:44.545170 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-13 01:14:44.545185 | orchestrator | Sunday 13 April 2025 01:10:28 +0000 (0:00:00.652) 0:04:01.263 ********** 2025-04-13 01:14:44.545201 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-04-13 01:14:44.545546 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-04-13 01:14:44.545572 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-04-13 01:14:44.545587 | orchestrator | 2025-04-13 01:14:44.545601 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-13 01:14:44.545629 | orchestrator | Sunday 13 April 2025 01:10:29 +0000 (0:00:01.323) 0:04:02.587 ********** 2025-04-13 01:14:44.545643 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-04-13 01:14:44.545658 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:14:44.545672 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-04-13 01:14:44.545687 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:14:44.545709 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-04-13 01:14:44.545723 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:14:44.545738 | orchestrator | 2025-04-13 01:14:44.545752 | orchestrator 
| TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-04-13 01:14:44.545766 | orchestrator | Sunday 13 April 2025 01:10:30 +0000 (0:00:00.839) 0:04:03.426 ********** 2025-04-13 01:14:44.545780 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-13 01:14:44.545795 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-13 01:14:44.545810 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.545824 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-13 01:14:44.545839 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-13 01:14:44.545853 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-04-13 01:14:44.545867 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-04-13 01:14:44.545886 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-04-13 01:14:44.545901 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.545915 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-13 01:14:44.545928 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-13 01:14:44.545943 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.545956 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-04-13 01:14:44.545968 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-04-13 01:14:44.545982 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-04-13 01:14:44.545995 | orchestrator | 2025-04-13 01:14:44.546221 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-04-13 
01:14:44.546252 | orchestrator | Sunday 13 April 2025 01:10:31 +0000 (0:00:00.995) 0:04:04.421 ********** 2025-04-13 01:14:44.546266 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.546281 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.546295 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.546309 | orchestrator | changed: [testbed-node-3] 2025-04-13 01:14:44.546323 | orchestrator | changed: [testbed-node-4] 2025-04-13 01:14:44.546336 | orchestrator | changed: [testbed-node-5] 2025-04-13 01:14:44.546350 | orchestrator | 2025-04-13 01:14:44.546365 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-04-13 01:14:44.546379 | orchestrator | Sunday 13 April 2025 01:10:32 +0000 (0:00:01.138) 0:04:05.559 ********** 2025-04-13 01:14:44.546393 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.546449 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.546757 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.546779 | orchestrator | changed: [testbed-node-3] 2025-04-13 01:14:44.546794 | orchestrator | changed: [testbed-node-4] 2025-04-13 01:14:44.546808 | orchestrator | changed: [testbed-node-5] 2025-04-13 01:14:44.546822 | orchestrator | 2025-04-13 01:14:44.546837 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-04-13 01:14:44.546852 | orchestrator | Sunday 13 April 2025 01:10:34 +0000 (0:00:01.845) 0:04:07.405 ********** 2025-04-13 01:14:44.546867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-13 01:14:44.546898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-13 01:14:44.546934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-13 01:14:44.546982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-13 01:14:44.546999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.547014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.547039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.547055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.547080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.547117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.547159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-13 01:14:44.547176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.547199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.547215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:14:44.547230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.547256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.547272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.547311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.547328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:14:44.547350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.547365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.547942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.547974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.548070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.548143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.548172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.548186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.548202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:14:44.548215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.548329 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548376 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.548421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.548440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:14:44.548521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.548570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.548596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:14:44.548680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.548783 | orchestrator |
2025-04-13 01:14:44.548795 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-04-13 01:14:44.548809 | orchestrator | Sunday 13 April 2025 01:10:37 +0000 (0:00:02.833) 0:04:10.238 **********
2025-04-13 01:14:44.548822 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-13 01:14:44.548840 | orchestrator |
2025-04-13 01:14:44.548851 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-04-13 01:14:44.548863 | orchestrator | Sunday 13 April 2025 01:10:38 +0000 (0:00:01.421) 0:04:11.660 **********
2025-04-13 01:14:44.548976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.548998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.549011 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.549025 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.549048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.549153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.549171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.549185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.549198 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.549211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.549236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.549316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.549333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.549346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.549359 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.549371 | orchestrator |
2025-04-13 01:14:44.549385 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-04-13 01:14:44.549397 | orchestrator | Sunday 13 April 2025 01:10:42 +0000 (0:00:03.873) 0:04:15.533 **********
2025-04-13 01:14:44.549426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.549449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.549519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.549535 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:14:44.549549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.549560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.549583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.549637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True,
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.549652 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:14:44.549739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.549758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.549772 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:14:44.549785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.549798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.549825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.549846 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.549860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.549873 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.549946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.549963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.549976 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.549989 | orchestrator | 2025-04-13 01:14:44.550001 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-04-13 01:14:44.550014 | orchestrator | Sunday 13 April 2025 01:10:44 +0000 (0:00:02.023) 0:04:17.557 ********** 2025-04-13 01:14:44.550055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.550081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.550123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.550136 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:14:44.550214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.550231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.550244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.550257 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:14:44.550284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.550304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.550316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.550347 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:14:44.550390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.550406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.550419 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.550431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.550445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.550478 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.550491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.550504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.550517 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.550529 | orchestrator | 
2025-04-13 01:14:44.550542 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-13 01:14:44.550554 | orchestrator | Sunday 13 April 2025 01:10:46 +0000 (0:00:02.390) 0:04:19.947 ********** 2025-04-13 01:14:44.550567 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.550580 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.550593 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.550605 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-13 01:14:44.550618 | orchestrator | 2025-04-13 01:14:44.550631 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-04-13 01:14:44.550643 | orchestrator | Sunday 13 April 2025 01:10:48 +0000 (0:00:01.141) 0:04:21.089 ********** 2025-04-13 01:14:44.550681 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-13 01:14:44.550695 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-13 01:14:44.550708 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-13 01:14:44.550720 | orchestrator | 2025-04-13 01:14:44.550733 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-04-13 01:14:44.550745 | orchestrator | Sunday 13 April 2025 01:10:48 +0000 (0:00:00.798) 0:04:21.887 ********** 2025-04-13 01:14:44.550758 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-13 01:14:44.550771 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-13 01:14:44.550783 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-13 01:14:44.550795 | orchestrator | 2025-04-13 01:14:44.550807 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-04-13 01:14:44.550819 | orchestrator | Sunday 13 April 2025 01:10:49 +0000 (0:00:00.811) 0:04:22.699 ********** 2025-04-13 01:14:44.550848 | orchestrator | ok: [testbed-node-3] 
2025-04-13 01:14:44.550861 | orchestrator | ok: [testbed-node-4] 2025-04-13 01:14:44.550874 | orchestrator | ok: [testbed-node-5] 2025-04-13 01:14:44.550888 | orchestrator | 2025-04-13 01:14:44.550902 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-04-13 01:14:44.550917 | orchestrator | Sunday 13 April 2025 01:10:50 +0000 (0:00:00.837) 0:04:23.536 ********** 2025-04-13 01:14:44.550931 | orchestrator | ok: [testbed-node-3] 2025-04-13 01:14:44.550947 | orchestrator | ok: [testbed-node-4] 2025-04-13 01:14:44.550970 | orchestrator | ok: [testbed-node-5] 2025-04-13 01:14:44.550984 | orchestrator | 2025-04-13 01:14:44.550998 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-04-13 01:14:44.551012 | orchestrator | Sunday 13 April 2025 01:10:50 +0000 (0:00:00.299) 0:04:23.836 ********** 2025-04-13 01:14:44.551027 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-13 01:14:44.551047 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-13 01:14:44.551062 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-13 01:14:44.551076 | orchestrator | 2025-04-13 01:14:44.551111 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-04-13 01:14:44.551124 | orchestrator | Sunday 13 April 2025 01:10:52 +0000 (0:00:01.354) 0:04:25.190 ********** 2025-04-13 01:14:44.551137 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-13 01:14:44.551151 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-13 01:14:44.551164 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-13 01:14:44.551177 | orchestrator | 2025-04-13 01:14:44.551190 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-04-13 01:14:44.551202 | orchestrator | Sunday 13 April 2025 01:10:53 +0000 
(0:00:01.384) 0:04:26.575 ********** 2025-04-13 01:14:44.551214 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-13 01:14:44.551227 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-13 01:14:44.551239 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-13 01:14:44.551251 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-04-13 01:14:44.551268 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-04-13 01:14:44.551280 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-04-13 01:14:44.551291 | orchestrator | 2025-04-13 01:14:44.551302 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-04-13 01:14:44.551313 | orchestrator | Sunday 13 April 2025 01:10:58 +0000 (0:00:05.306) 0:04:31.881 ********** 2025-04-13 01:14:44.551324 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:14:44.551335 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:14:44.551347 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:14:44.551357 | orchestrator | 2025-04-13 01:14:44.551369 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-04-13 01:14:44.551380 | orchestrator | Sunday 13 April 2025 01:10:59 +0000 (0:00:00.300) 0:04:32.182 ********** 2025-04-13 01:14:44.551391 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:14:44.551402 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:14:44.551413 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:14:44.551425 | orchestrator | 2025-04-13 01:14:44.551436 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-04-13 01:14:44.551447 | orchestrator | Sunday 13 April 2025 01:10:59 +0000 (0:00:00.485) 0:04:32.668 ********** 2025-04-13 01:14:44.551458 | orchestrator | changed: [testbed-node-3] 2025-04-13 01:14:44.551469 | orchestrator | 
changed: [testbed-node-4] 2025-04-13 01:14:44.551481 | orchestrator | changed: [testbed-node-5] 2025-04-13 01:14:44.551503 | orchestrator | 2025-04-13 01:14:44.551514 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-04-13 01:14:44.551525 | orchestrator | Sunday 13 April 2025 01:11:01 +0000 (0:00:01.528) 0:04:34.197 ********** 2025-04-13 01:14:44.551537 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-13 01:14:44.551549 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-13 01:14:44.551565 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-13 01:14:44.551575 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-13 01:14:44.551592 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-13 01:14:44.551603 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-13 01:14:44.551613 | orchestrator | 2025-04-13 01:14:44.551623 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-04-13 01:14:44.551670 | orchestrator | Sunday 13 April 2025 01:11:04 +0000 (0:00:03.363) 0:04:37.561 ********** 2025-04-13 01:14:44.551682 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-13 01:14:44.551693 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-13 01:14:44.551705 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-13 
01:14:44.551716 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-13 01:14:44.551727 | orchestrator | changed: [testbed-node-3] 2025-04-13 01:14:44.551739 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-13 01:14:44.551751 | orchestrator | changed: [testbed-node-4] 2025-04-13 01:14:44.551762 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-13 01:14:44.551773 | orchestrator | changed: [testbed-node-5] 2025-04-13 01:14:44.551785 | orchestrator | 2025-04-13 01:14:44.551796 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-04-13 01:14:44.551807 | orchestrator | Sunday 13 April 2025 01:11:07 +0000 (0:00:03.269) 0:04:40.830 ********** 2025-04-13 01:14:44.551818 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:14:44.551829 | orchestrator | 2025-04-13 01:14:44.551840 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-04-13 01:14:44.551851 | orchestrator | Sunday 13 April 2025 01:11:07 +0000 (0:00:00.117) 0:04:40.948 ********** 2025-04-13 01:14:44.551862 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:14:44.551873 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:14:44.551884 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:14:44.551895 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.551906 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.551918 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.551928 | orchestrator | 2025-04-13 01:14:44.551939 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-04-13 01:14:44.551950 | orchestrator | Sunday 13 April 2025 01:11:08 +0000 (0:00:00.902) 0:04:41.851 ********** 2025-04-13 01:14:44.551961 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-13 01:14:44.551971 | orchestrator | 2025-04-13 01:14:44.551982 | orchestrator | TASK [nova-cell : Set 
vendordata file path] ************************************ 2025-04-13 01:14:44.551993 | orchestrator | Sunday 13 April 2025 01:11:09 +0000 (0:00:00.402) 0:04:42.253 ********** 2025-04-13 01:14:44.552004 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:14:44.552015 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:14:44.552027 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:14:44.552037 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.552049 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.552060 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.552071 | orchestrator | 2025-04-13 01:14:44.552082 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-04-13 01:14:44.552143 | orchestrator | Sunday 13 April 2025 01:11:10 +0000 (0:00:00.893) 0:04:43.146 ********** 2025-04-13 01:14:44.552155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.552176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.552233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.552255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.552266 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.552306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.552342 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.552379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.552398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.552454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.552479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.552491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.552509 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.552520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.552532 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.552577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.552588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.552608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.552630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.552654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.552665 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.552713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.552724 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.552755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.552768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.552813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.552824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.552865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.552876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.552925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.552959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.552978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.552990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 
'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553049 | orchestrator |
2025-04-13 01:14:44.553060 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-04-13 01:14:44.553071 | orchestrator | Sunday 13 April 2025 01:11:14 +0000 (0:00:03.944) 0:04:47.091 **********
2025-04-13 01:14:44.553114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.553135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.553148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.553160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.553196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.553209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.553221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:14:44.553248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.553271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.553281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:14:44.553291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.553350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.553361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.553371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.553382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:14:44.553392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.553446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.553462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.553474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.553486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-04-13 01:14:44.553497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-04-13 01:14:44.553531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553571 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.553660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.553672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:14:44.553695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.553707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.553719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:14:44.553752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.553771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})
2025-04-13 01:14:44.553783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-13 01:14:44.553795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.553938 | orchestrator |
2025-04-13 01:14:44.553949 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-04-13 01:14:44.553958 | orchestrator | Sunday 13 April 2025 01:11:21 +0000 (0:00:07.535) 0:04:54.627 **********
2025-04-13 01:14:44.553968 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:14:44.553978 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:14:44.553987 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:14:44.554003 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:14:44.554012 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:14:44.554052 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:14:44.554063 | orchestrator |
2025-04-13 01:14:44.554074 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-04-13 01:14:44.554100 | orchestrator | Sunday 13 April 2025 01:11:23 +0000 (0:00:01.795) 0:04:56.422 **********
2025-04-13 01:14:44.554111 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-04-13 01:14:44.554121 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-04-13 01:14:44.554137 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-04-13 01:14:44.554148 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-04-13 01:14:44.554184 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-04-13 01:14:44.554196 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:14:44.554207 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-04-13 01:14:44.554217 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:14:44.554228 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-04-13 01:14:44.554238 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:14:44.554249 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-04-13 01:14:44.554259 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-04-13 01:14:44.554269 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-04-13 01:14:44.554280 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-04-13 01:14:44.554290 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-04-13 01:14:44.554301 | orchestrator |
2025-04-13 01:14:44.554311 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-04-13 01:14:44.554322 | orchestrator | Sunday 13 April 2025 01:11:29 +0000 (0:00:05.617) 0:05:02.040 **********
2025-04-13 01:14:44.554332 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:14:44.554342 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:14:44.554353 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:14:44.554363 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:14:44.554373 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:14:44.554383 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:14:44.554393 | orchestrator |
2025-04-13 01:14:44.554404 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-04-13 01:14:44.554414 | orchestrator | Sunday 13 April 2025 01:11:29 +0000 (0:00:00.891) 0:05:02.932 **********
2025-04-13 01:14:44.554425 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-04-13 01:14:44.554435 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-04-13 01:14:44.554446 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-04-13 01:14:44.554456 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-04-13 01:14:44.554467 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-04-13 01:14:44.554477 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest':
'auth.conf', 'service': 'nova-libvirt'})  2025-04-13 01:14:44.554488 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-04-13 01:14:44.554505 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-04-13 01:14:44.554515 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-04-13 01:14:44.554524 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-04-13 01:14:44.554534 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.554543 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-04-13 01:14:44.554552 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.554566 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-04-13 01:14:44.554575 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.554586 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-04-13 01:14:44.554595 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-04-13 01:14:44.554605 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-04-13 01:14:44.554615 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-04-13 01:14:44.554625 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-04-13 01:14:44.554636 | orchestrator | changed: 
[testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-04-13 01:14:44.554646 | orchestrator | 2025-04-13 01:14:44.554657 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-04-13 01:14:44.554667 | orchestrator | Sunday 13 April 2025 01:11:37 +0000 (0:00:07.383) 0:05:10.315 ********** 2025-04-13 01:14:44.554678 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-04-13 01:14:44.554689 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-04-13 01:14:44.554720 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-04-13 01:14:44.554732 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-04-13 01:14:44.554743 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-04-13 01:14:44.554754 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-13 01:14:44.554764 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-04-13 01:14:44.554775 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-13 01:14:44.554785 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-13 01:14:44.554796 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-13 01:14:44.554806 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-04-13 01:14:44.554817 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.554827 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-13 01:14:44.554838 | 
orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-13 01:14:44.554849 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-04-13 01:14:44.554859 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.554869 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-04-13 01:14:44.554880 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.554896 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-13 01:14:44.554906 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-13 01:14:44.554917 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-13 01:14:44.554927 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-13 01:14:44.554937 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-13 01:14:44.554947 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-13 01:14:44.554957 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-13 01:14:44.554968 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-13 01:14:44.554978 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-13 01:14:44.554989 | orchestrator | 2025-04-13 01:14:44.554999 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-04-13 01:14:44.555010 | orchestrator | Sunday 13 April 2025 01:11:47 +0000 (0:00:10.435) 0:05:20.751 ********** 2025-04-13 01:14:44.555020 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:14:44.555031 | orchestrator 
| skipping: [testbed-node-4] 2025-04-13 01:14:44.555041 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:14:44.555056 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.555066 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.555077 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.555133 | orchestrator | 2025-04-13 01:14:44.555143 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-04-13 01:14:44.555153 | orchestrator | Sunday 13 April 2025 01:11:48 +0000 (0:00:00.738) 0:05:21.489 ********** 2025-04-13 01:14:44.555162 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:14:44.555171 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:14:44.555182 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:14:44.555192 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.555203 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.555213 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.555229 | orchestrator | 2025-04-13 01:14:44.555240 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-04-13 01:14:44.555250 | orchestrator | Sunday 13 April 2025 01:11:49 +0000 (0:00:00.937) 0:05:22.426 ********** 2025-04-13 01:14:44.555261 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.555271 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.555281 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.555292 | orchestrator | changed: [testbed-node-3] 2025-04-13 01:14:44.555302 | orchestrator | changed: [testbed-node-5] 2025-04-13 01:14:44.555313 | orchestrator | changed: [testbed-node-4] 2025-04-13 01:14:44.555323 | orchestrator | 2025-04-13 01:14:44.555337 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-04-13 01:14:44.555348 | orchestrator | Sunday 13 April 2025 01:11:52 +0000 (0:00:02.768) 
0:05:25.195 ********** 2025-04-13 01:14:44.555387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.555408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.555434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.555446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.555458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.555468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2025-04-13 01:14:44.555479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.555513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.555539 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:14:44.555549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.555567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.555579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.555587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 
'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.555595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.555625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.555640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.555649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.555658 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:14:44.555676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.555686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.555696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.555731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.555746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.555756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.555773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.555784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.555793 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:14:44.555804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.555822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.555832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.555842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.555852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.555869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.555879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.555894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.555910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.555920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.555930 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.555946 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.555957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.555967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.555982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.555997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556016 | orchestrator | skipping: [testbed-node-1] 
2025-04-13 01:14:44.556026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.556042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.556052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.556069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.556081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.556109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 
01:14:44.556118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556135 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.556140 | orchestrator | 2025-04-13 01:14:44.556146 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-04-13 01:14:44.556151 | orchestrator | Sunday 13 April 2025 01:11:54 +0000 (0:00:02.034) 0:05:27.229 ********** 2025-04-13 01:14:44.556157 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-04-13 01:14:44.556162 | orchestrator | skipping: 
[testbed-node-3] => (item=nova-compute-ironic)  2025-04-13 01:14:44.556167 | orchestrator | skipping: [testbed-node-3] 2025-04-13 01:14:44.556177 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-04-13 01:14:44.556183 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-04-13 01:14:44.556188 | orchestrator | skipping: [testbed-node-4] 2025-04-13 01:14:44.556193 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-04-13 01:14:44.556199 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-04-13 01:14:44.556204 | orchestrator | skipping: [testbed-node-5] 2025-04-13 01:14:44.556209 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-04-13 01:14:44.556214 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-04-13 01:14:44.556220 | orchestrator | skipping: [testbed-node-0] 2025-04-13 01:14:44.556225 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-04-13 01:14:44.556230 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-04-13 01:14:44.556235 | orchestrator | skipping: [testbed-node-1] 2025-04-13 01:14:44.556241 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-04-13 01:14:44.556246 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-04-13 01:14:44.556251 | orchestrator | skipping: [testbed-node-2] 2025-04-13 01:14:44.556256 | orchestrator | 2025-04-13 01:14:44.556262 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-04-13 01:14:44.556267 | orchestrator | Sunday 13 April 2025 01:11:55 +0000 (0:00:00.860) 0:05:28.090 ********** 2025-04-13 01:14:44.556286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.556292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.556298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.556303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.556313 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556333 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-13 01:14:44.556349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-13 01:14:44.556354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.556368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.556385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 
'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.556390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.556400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.556411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.556416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.556425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556435 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.556450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.556455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': 
{'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.556461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.556485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.556497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.556503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-13 01:14:44.556519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-13 01:14:44.556527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2025-04-13 01:14:44.556552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556577 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556603 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-13 
01:14:44.556609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-13 01:14:44.556625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-13 01:14:44.556635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.556649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-04-13 01:14:44.556655 | orchestrator |
2025-04-13 01:14:44.556660 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-04-13 01:14:44.556665 | orchestrator | Sunday 13 April 2025 01:11:58 +0000 (0:00:03.508) 0:05:31.599 **********
2025-04-13 01:14:44.556671 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:14:44.556676 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:14:44.556681 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:14:44.556686 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:14:44.556692 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:14:44.556697 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:14:44.556702 | orchestrator |
2025-04-13 01:14:44.556707 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-04-13 01:14:44.556713 | orchestrator | Sunday 13 April 2025 01:11:59 +0000 (0:00:00.725) 0:05:32.324 **********
2025-04-13 01:14:44.556718 | orchestrator |
2025-04-13 01:14:44.556723 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-04-13 01:14:44.556728 | orchestrator | Sunday 13 April 2025 01:11:59 +0000 (0:00:00.301) 0:05:32.625 **********
2025-04-13 01:14:44.556734 | orchestrator |
2025-04-13 01:14:44.556739 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-04-13 01:14:44.556746 | orchestrator | Sunday 13 April 2025 01:11:59 +0000 (0:00:00.107) 0:05:32.733 **********
2025-04-13 01:14:44.556754 | orchestrator |
2025-04-13 01:14:44.556763 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-04-13 01:14:44.556771 | orchestrator | Sunday 13 April 2025 01:12:00 +0000 (0:00:00.301) 0:05:33.035 **********
2025-04-13 01:14:44.556777 | orchestrator |
2025-04-13 01:14:44.556782 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-04-13 01:14:44.556787 | orchestrator | Sunday 13 April 2025 01:12:00 +0000 (0:00:00.118) 0:05:33.153 **********
2025-04-13 01:14:44.556792 | orchestrator |
2025-04-13 01:14:44.556797 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-04-13 01:14:44.556802 | orchestrator | Sunday 13 April 2025 01:12:00 +0000 (0:00:00.302) 0:05:33.456 **********
2025-04-13 01:14:44.556808 | orchestrator |
2025-04-13 01:14:44.556813 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-04-13 01:14:44.556818 | orchestrator | Sunday 13 April 2025 01:12:00 +0000 (0:00:00.111) 0:05:33.568 **********
2025-04-13 01:14:44.556823 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:14:44.556829 | orchestrator | changed: [testbed-node-1]
2025-04-13 01:14:44.556834 | orchestrator | changed: [testbed-node-2]
2025-04-13 01:14:44.556839 | orchestrator |
2025-04-13 01:14:44.556848 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-04-13 01:14:44.556854 | orchestrator | Sunday 13 April 2025 01:12:12 +0000 (0:00:12.372) 0:05:45.941 **********
2025-04-13 01:14:44.556859 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:14:44.556864 | orchestrator | changed: [testbed-node-1]
2025-04-13 01:14:44.556869 | orchestrator | changed: [testbed-node-2]
2025-04-13 01:14:44.556874 | orchestrator |
2025-04-13 01:14:44.556880 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-04-13 01:14:44.556885 | orchestrator | Sunday 13 April 2025 01:12:29 +0000 (0:00:16.169) 0:06:02.110 **********
2025-04-13 01:14:44.556892 | orchestrator | changed: [testbed-node-4]
2025-04-13 01:14:44.556898 | orchestrator | changed: [testbed-node-3]
2025-04-13 01:14:44.556903 | orchestrator | changed: [testbed-node-5]
2025-04-13 01:14:44.556909 | orchestrator |
2025-04-13 01:14:44.556914 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-04-13 01:14:44.556919 | orchestrator | Sunday 13 April 2025 01:12:50 +0000 (0:00:21.023) 0:06:23.133 **********
2025-04-13 01:14:44.556924 | orchestrator | changed: [testbed-node-3]
2025-04-13 01:14:44.556929 | orchestrator | changed: [testbed-node-5]
2025-04-13 01:14:44.556935 | orchestrator | changed: [testbed-node-4]
2025-04-13 01:14:44.556940 | orchestrator |
2025-04-13 01:14:44.556945 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-04-13 01:14:44.556950 | orchestrator | Sunday 13 April 2025 01:13:14 +0000 (0:00:24.173) 0:06:47.306 **********
2025-04-13 01:14:44.556955 | orchestrator | changed: [testbed-node-3]
2025-04-13 01:14:44.556960 | orchestrator | changed: [testbed-node-5]
2025-04-13 01:14:44.556966 | orchestrator | changed: [testbed-node-4]
2025-04-13 01:14:44.556971 | orchestrator |
2025-04-13 01:14:44.556976 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-04-13 01:14:44.556981 | orchestrator | Sunday 13 April 2025 01:13:15 +0000 (0:00:01.135) 0:06:48.441 **********
2025-04-13 01:14:44.556986 | orchestrator | changed: [testbed-node-3]
2025-04-13 01:14:44.556991 | orchestrator | changed: [testbed-node-4]
2025-04-13 01:14:44.556997 | orchestrator | changed: [testbed-node-5]
2025-04-13 01:14:44.557002 | orchestrator |
2025-04-13 01:14:44.557010 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-04-13 01:14:44.557015 | orchestrator | Sunday 13 April 2025 01:13:16 +0000 (0:00:00.768) 0:06:49.210 **********
2025-04-13 01:14:44.557020 | orchestrator | changed: [testbed-node-5]
2025-04-13 01:14:44.557025 | orchestrator | changed: [testbed-node-3]
2025-04-13 01:14:44.557031 | orchestrator | changed: [testbed-node-4]
2025-04-13 01:14:44.557036 | orchestrator |
2025-04-13 01:14:44.557041 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-04-13 01:14:44.557047 | orchestrator | Sunday 13 April 2025 01:13:37 +0000 (0:00:21.200) 0:07:10.411 **********
2025-04-13 01:14:44.557052 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:14:44.557057 | orchestrator |
2025-04-13 01:14:44.557062 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-04-13 01:14:44.557067 | orchestrator | Sunday 13 April 2025 01:13:37 +0000 (0:00:00.144) 0:07:10.555 **********
2025-04-13 01:14:44.557073 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:14:44.557078 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:14:44.557099 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:14:44.557106 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:14:44.557111 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:14:44.557119 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-04-13 01:14:44.557125 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-04-13 01:14:44.557130 | orchestrator |
2025-04-13 01:14:44.557135 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-04-13 01:14:44.557141 | orchestrator | Sunday 13 April 2025 01:13:59 +0000 (0:00:21.996) 0:07:32.551 **********
2025-04-13 01:14:44.557149 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:14:44.557155 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:14:44.557160 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:14:44.557165 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:14:44.557170 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:14:44.557175 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:14:44.557181 | orchestrator |
2025-04-13 01:14:44.557186 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-04-13 01:14:44.557191 | orchestrator | Sunday 13 April 2025 01:14:09 +0000 (0:00:10.055) 0:07:42.607 **********
2025-04-13 01:14:44.557197 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:14:44.557202 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:14:44.557207 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:14:44.557212 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:14:44.557217 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:14:44.557223 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2025-04-13 01:14:44.557228 | orchestrator |
2025-04-13 01:14:44.557233 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-04-13 01:14:44.557238 | orchestrator | Sunday 13 April 2025 01:14:12 +0000 (0:00:03.125) 0:07:45.732 **********
2025-04-13 01:14:44.557243 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-04-13 01:14:44.557249 | orchestrator |
2025-04-13 01:14:44.557254 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-04-13 01:14:44.557259 | orchestrator | Sunday 13 April 2025 01:14:23 +0000 (0:00:10.630) 0:07:56.363 **********
2025-04-13 01:14:44.557264 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-04-13 01:14:44.557269 | orchestrator |
2025-04-13 01:14:44.557275 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-04-13 01:14:44.557280 | orchestrator | Sunday 13 April 2025 01:14:24 +0000 (0:00:01.164) 0:07:57.527 **********
2025-04-13 01:14:44.557285 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:14:44.557290 | orchestrator |
2025-04-13 01:14:44.557295 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-04-13 01:14:44.557301 | orchestrator | Sunday 13 April 2025 01:14:25 +0000 (0:00:01.467) 0:07:58.995 **********
2025-04-13 01:14:44.557306 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-04-13 01:14:44.557311 | orchestrator |
2025-04-13 01:14:44.557316 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-04-13 01:14:44.557321 | orchestrator | Sunday 13 April 2025 01:14:35 +0000 (0:00:09.121) 0:08:08.117 **********
2025-04-13 01:14:44.557327 | orchestrator | ok: [testbed-node-3]
2025-04-13 01:14:44.557332 | orchestrator | ok: [testbed-node-4]
2025-04-13 01:14:44.557337 | orchestrator | ok: [testbed-node-5]
2025-04-13 01:14:44.557342 | orchestrator | ok: [testbed-node-0]
2025-04-13 01:14:44.557347 | orchestrator | ok: [testbed-node-1]
2025-04-13 01:14:44.557352 | orchestrator | ok: [testbed-node-2]
2025-04-13 01:14:44.557358 | orchestrator |
2025-04-13 01:14:44.557365 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-04-13 01:14:44.557371 | orchestrator |
2025-04-13 01:14:44.557376 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-04-13 01:14:44.557381 | orchestrator | Sunday 13 April 2025 01:14:37 +0000 (0:00:02.150) 0:08:10.268 **********
2025-04-13 01:14:44.557387 | orchestrator | changed: [testbed-node-0]
2025-04-13 01:14:44.557392 | orchestrator | changed: [testbed-node-1]
2025-04-13 01:14:44.557397 | orchestrator | changed: [testbed-node-2]
2025-04-13 01:14:44.557402 | orchestrator |
2025-04-13 01:14:44.557407 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-04-13 01:14:44.557413 | orchestrator |
2025-04-13 01:14:44.557418 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-04-13 01:14:44.557423 | orchestrator | Sunday 13 April 2025 01:14:38 +0000 (0:00:00.992) 0:08:11.260 **********
2025-04-13 01:14:44.557428 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:14:44.557436 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:14:44.557442 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:14:44.557447 | orchestrator |
2025-04-13 01:14:44.557452 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-04-13 01:14:44.557457 | orchestrator |
2025-04-13 01:14:44.557463 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-04-13 01:14:44.557468 | orchestrator | Sunday 13 April 2025 01:14:38 +0000 (0:00:00.755) 0:08:12.015 **********
2025-04-13 01:14:44.557473 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-04-13 01:14:44.557478 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-04-13 01:14:44.557483 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-04-13 01:14:44.557488 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-04-13 01:14:44.557494 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-04-13 01:14:44.557499 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-04-13 01:14:44.557504 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-04-13 01:14:44.557509 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-04-13 01:14:44.557514 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-04-13 01:14:44.557520 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-04-13 01:14:44.557525 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-04-13 01:14:44.557530 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-04-13 01:14:44.557535 | orchestrator | skipping: [testbed-node-3]
2025-04-13 01:14:44.557540 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-04-13 01:14:44.557545 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-04-13 01:14:44.557551 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-04-13 01:14:44.557559 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-04-13 01:14:44.557564 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-04-13 01:14:44.557570 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-04-13 01:14:44.557575 | orchestrator | skipping: [testbed-node-4]
2025-04-13 01:14:44.557580 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-04-13 01:14:44.557585 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-04-13 01:14:44.557590 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-04-13 01:14:44.557596 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-04-13 01:14:44.557601 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-04-13 01:14:44.557606 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-04-13 01:14:44.557611 | orchestrator | skipping: [testbed-node-5]
2025-04-13 01:14:44.557616 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-04-13 01:14:44.557622 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-04-13 01:14:44.557627 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-04-13 01:14:44.557632 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-04-13 01:14:44.557637 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-04-13 01:14:44.557642 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-04-13 01:14:44.557647 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:14:44.557653 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:14:44.557658 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-04-13 01:14:44.557663 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-04-13 01:14:44.557668 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-04-13 01:14:44.557673 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-04-13 01:14:44.557682 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-04-13 01:14:44.557688 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-04-13 01:14:44.557693 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:14:44.557698 | orchestrator |
2025-04-13 01:14:44.557703 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-04-13 01:14:44.557708 | orchestrator |
2025-04-13 01:14:44.557714 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-04-13 01:14:44.557721 | orchestrator | Sunday 13 April 2025 01:14:40 +0000 (0:00:01.467) 0:08:13.483 **********
2025-04-13 01:14:44.557727 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-04-13 01:14:44.557732 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-04-13 01:14:44.557737 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:14:44.557742 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-04-13 01:14:44.557748 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-04-13 01:14:44.557753 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:14:44.557761 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-04-13 01:14:47.579976 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-04-13 01:14:47.580177 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:14:47.580203 | orchestrator |
2025-04-13 01:14:47.580219 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-04-13 01:14:47.580234 | orchestrator |
2025-04-13 01:14:47.580249 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-04-13 01:14:47.580263 | orchestrator | Sunday 13 April 2025 01:14:41 +0000 (0:00:00.630) 0:08:14.113 **********
2025-04-13 01:14:47.580277 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:14:47.580292 | orchestrator |
2025-04-13 01:14:47.580320 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-04-13 01:14:47.580346 | orchestrator |
2025-04-13 01:14:47.580360 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-04-13 01:14:47.580374 | orchestrator | Sunday 13 April 2025 01:14:42 +0000 (0:00:00.935) 0:08:15.049 **********
2025-04-13 01:14:47.580388 | orchestrator | skipping: [testbed-node-0]
2025-04-13 01:14:47.580402 | orchestrator | skipping: [testbed-node-1]
2025-04-13 01:14:47.580416 | orchestrator | skipping: [testbed-node-2]
2025-04-13 01:14:47.580430 | orchestrator |
2025-04-13 01:14:47.580444 | orchestrator | PLAY RECAP *********************************************************************
2025-04-13 01:14:47.580458 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-13 01:14:47.580476 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-04-13 01:14:47.580490 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-04-13 01:14:47.580506 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-04-13 01:14:47.580522 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-04-13 01:14:47.580538 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-04-13 01:14:47.580554 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-04-13 01:14:47.580569 | orchestrator |
2025-04-13 01:14:47.580585 | orchestrator |
2025-04-13 01:14:47.580602 | orchestrator | TASKS RECAP ********************************************************************
2025-04-13 01:14:47.580648 | orchestrator | Sunday 13 April 2025 01:14:42 +0000 (0:00:00.629) 0:08:15.678 **********
2025-04-13 01:14:47.580664 | orchestrator | ===============================================================================
2025-04-13 01:14:47.580680 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 27.73s
2025-04-13 01:14:47.580695 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 24.17s
2025-04-13 01:14:47.580711 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.00s
2025-04-13 01:14:47.580726 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.20s
2025-04-13 01:14:47.580742 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.02s
2025-04-13 01:14:47.580758 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 18.61s
2025-04-13 01:14:47.580773 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.36s
2025-04-13 01:14:47.580789 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.17s
2025-04-13 01:14:47.580804 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.79s
2025-04-13 01:14:47.580819 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.55s
2025-04-13 01:14:47.580835 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.37s
2025-04-13 01:14:47.580851 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.55s
2025-04-13 01:14:47.580864 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.06s
2025-04-13 01:14:47.580878 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.06s
2025-04-13 01:14:47.580892 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.65s
2025-04-13 01:14:47.580906 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.63s
2025-04-13 01:14:47.580920 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 10.44s
2025-04-13 01:14:47.580934 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.06s
2025-04-13 01:14:47.580948 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.30s
2025-04-13 01:14:47.580965 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.12s
2025-04-13 01:14:47.580991 | orchestrator | 2025-04-13 01:14:44 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:14:47.581038 | orchestrator | 2025-04-13 01:14:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:14:50.631846 | orchestrator | 2025-04-13 01:14:47 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:14:50.631980 | orchestrator | 2025-04-13 01:14:50 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:14:53.671322 | orchestrator | 2025-04-13 01:14:50 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:14:53.671460 | orchestrator | 2025-04-13 01:14:53 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:14:56.720617 | orchestrator | 2025-04-13 01:14:53 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:14:56.720757 | orchestrator | 2025-04-13 01:14:56 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:14:59.778187 | orchestrator | 2025-04-13 01:14:56 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:14:59.778337 | orchestrator | 2025-04-13 01:14:59 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:15:02.821593 | orchestrator | 2025-04-13 01:14:59 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:15:02.821734 | orchestrator | 2025-04-13 01:15:02 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:15:05.873882 | orchestrator | 2025-04-13 01:15:02 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:15:05.874157 | orchestrator | 2025-04-13 01:15:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:15:08.923526 | orchestrator | 2025-04-13 01:15:05 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:15:08.923667 | orchestrator | 2025-04-13 01:15:08 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:15:11.968266 | orchestrator | 2025-04-13 01:15:08 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:15:11.968440 | orchestrator | 2025-04-13 01:15:11 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:15:15.019981 | orchestrator | 2025-04-13 01:15:11 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:15:15.020150 | orchestrator | 2025-04-13 01:15:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:15:18.065831 | orchestrator | 2025-04-13 01:15:15 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:15:18.065988 | orchestrator | 2025-04-13 01:15:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:15:21.111704 | orchestrator | 2025-04-13 01:15:18 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:15:21.111869 | orchestrator | 2025-04-13 01:15:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:15:24.161541 | orchestrator | 2025-04-13 01:15:21 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:15:24.161698 | orchestrator | 2025-04-13 01:15:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:15:27.207045 | orchestrator | 2025-04-13 01:15:24 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:15:27.207256 | orchestrator | 2025-04-13 01:15:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:15:30.257172 | orchestrator | 2025-04-13 01:15:27 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:15:30.257324 | orchestrator | 2025-04-13 01:15:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
2025-04-13 01:15:33.310562 | orchestrator | 2025-04-13 01:15:30 | INFO  | Wait 1 second(s) until the next check
2025-04-13 01:15:33.310704 | orchestrator | 2025-04-13 01:15:33 | INFO  | Task
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:15:36.364342 | orchestrator | 2025-04-13 01:15:33 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:15:36.364495 | orchestrator | 2025-04-13 01:15:36 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:15:39.407472 | orchestrator | 2025-04-13 01:15:36 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:15:39.407608 | orchestrator | 2025-04-13 01:15:39 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:15:42.461459 | orchestrator | 2025-04-13 01:15:39 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:15:42.461641 | orchestrator | 2025-04-13 01:15:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:15:45.523623 | orchestrator | 2025-04-13 01:15:42 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:15:45.523795 | orchestrator | 2025-04-13 01:15:45 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:15:48.575621 | orchestrator | 2025-04-13 01:15:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:15:48.575747 | orchestrator | 2025-04-13 01:15:48 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:15:51.631404 | orchestrator | 2025-04-13 01:15:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:15:51.631577 | orchestrator | 2025-04-13 01:15:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:15:54.683787 | orchestrator | 2025-04-13 01:15:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:15:54.683920 | orchestrator | 2025-04-13 01:15:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:15:57.730215 | orchestrator | 2025-04-13 01:15:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:15:57.730352 | orchestrator | 2025-04-13 01:15:57 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:00.775722 | orchestrator | 2025-04-13 01:15:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:00.775877 | orchestrator | 2025-04-13 01:16:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:03.821617 | orchestrator | 2025-04-13 01:16:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:03.821717 | orchestrator | 2025-04-13 01:16:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:06.872976 | orchestrator | 2025-04-13 01:16:03 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:06.873160 | orchestrator | 2025-04-13 01:16:06 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:09.916361 | orchestrator | 2025-04-13 01:16:06 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:09.916503 | orchestrator | 2025-04-13 01:16:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:12.956229 | orchestrator | 2025-04-13 01:16:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:12.956380 | orchestrator | 2025-04-13 01:16:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:15.997797 | orchestrator | 2025-04-13 01:16:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:15.997937 | orchestrator | 2025-04-13 01:16:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:19.050170 | orchestrator | 2025-04-13 01:16:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:19.050326 | orchestrator | 2025-04-13 01:16:19 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:22.096793 | orchestrator | 2025-04-13 01:16:19 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:22.096974 | orchestrator | 2025-04-13 01:16:22 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:25.141684 | orchestrator | 2025-04-13 01:16:22 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:25.141834 | orchestrator | 2025-04-13 01:16:25 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:28.186678 | orchestrator | 2025-04-13 01:16:25 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:28.186819 | orchestrator | 2025-04-13 01:16:28 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:31.238851 | orchestrator | 2025-04-13 01:16:28 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:31.239005 | orchestrator | 2025-04-13 01:16:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:34.289587 | orchestrator | 2025-04-13 01:16:31 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:34.289682 | orchestrator | 2025-04-13 01:16:34 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:37.337873 | orchestrator | 2025-04-13 01:16:34 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:37.338146 | orchestrator | 2025-04-13 01:16:37 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:40.382673 | orchestrator | 2025-04-13 01:16:37 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:40.382784 | orchestrator | 2025-04-13 01:16:40 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:43.422239 | orchestrator | 2025-04-13 01:16:40 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:43.422376 | orchestrator | 2025-04-13 01:16:43 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:46.469987 | orchestrator | 2025-04-13 01:16:43 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:46.470145 | orchestrator | 2025-04-13 01:16:46 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:49.514202 | orchestrator | 2025-04-13 01:16:46 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:49.514359 | orchestrator | 2025-04-13 01:16:49 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:52.568456 | orchestrator | 2025-04-13 01:16:49 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:52.568593 | orchestrator | 2025-04-13 01:16:52 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:55.619581 | orchestrator | 2025-04-13 01:16:52 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:55.619775 | orchestrator | 2025-04-13 01:16:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:16:58.669079 | orchestrator | 2025-04-13 01:16:55 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:16:58.669304 | orchestrator | 2025-04-13 01:16:58 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:01.720569 | orchestrator | 2025-04-13 01:16:58 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:01.720747 | orchestrator | 2025-04-13 01:17:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:04.767382 | orchestrator | 2025-04-13 01:17:01 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:04.767579 | orchestrator | 2025-04-13 01:17:04 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:07.819158 | orchestrator | 2025-04-13 01:17:04 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:07.819316 | orchestrator | 2025-04-13 01:17:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:10.881478 | orchestrator | 2025-04-13 01:17:07 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:10.881622 | orchestrator | 2025-04-13 01:17:10 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:13.929835 | orchestrator | 2025-04-13 01:17:10 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:13.929991 | orchestrator | 2025-04-13 01:17:13 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:16.987548 | orchestrator | 2025-04-13 01:17:13 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:16.987720 | orchestrator | 2025-04-13 01:17:16 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:20.037566 | orchestrator | 2025-04-13 01:17:16 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:20.037724 | orchestrator | 2025-04-13 01:17:20 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:23.092508 | orchestrator | 2025-04-13 01:17:20 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:23.092684 | orchestrator | 2025-04-13 01:17:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:26.146750 | orchestrator | 2025-04-13 01:17:23 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:26.146897 | orchestrator | 2025-04-13 01:17:26 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:29.198675 | orchestrator | 2025-04-13 01:17:26 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:29.198845 | orchestrator | 2025-04-13 01:17:29 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:32.246381 | orchestrator | 2025-04-13 01:17:29 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:32.246561 | orchestrator | 2025-04-13 01:17:32 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:35.301490 | orchestrator | 2025-04-13 01:17:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:35.301671 | orchestrator | 2025-04-13 01:17:35 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:38.360675 | orchestrator | 2025-04-13 01:17:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:38.360826 | orchestrator | 2025-04-13 01:17:38 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:41.411809 | orchestrator | 2025-04-13 01:17:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:41.411954 | orchestrator | 2025-04-13 01:17:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:44.458087 | orchestrator | 2025-04-13 01:17:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:44.458310 | orchestrator | 2025-04-13 01:17:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:47.505590 | orchestrator | 2025-04-13 01:17:44 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:47.505733 | orchestrator | 2025-04-13 01:17:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:50.559562 | orchestrator | 2025-04-13 01:17:47 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:50.559731 | orchestrator | 2025-04-13 01:17:50 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:53.610286 | orchestrator | 2025-04-13 01:17:50 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:53.610430 | orchestrator | 2025-04-13 01:17:53 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:56.662481 | orchestrator | 2025-04-13 01:17:53 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:56.662616 | orchestrator | 2025-04-13 01:17:56 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:17:59.714467 | orchestrator | 2025-04-13 01:17:56 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:17:59.714643 | orchestrator | 2025-04-13 01:17:59 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:02.766998 | orchestrator | 2025-04-13 01:17:59 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:02.767182 | orchestrator | 2025-04-13 01:18:02 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:05.823810 | orchestrator | 2025-04-13 01:18:02 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:05.823951 | orchestrator | 2025-04-13 01:18:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:08.867288 | orchestrator | 2025-04-13 01:18:05 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:08.867428 | orchestrator | 2025-04-13 01:18:08 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:11.909221 | orchestrator | 2025-04-13 01:18:08 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:11.909377 | orchestrator | 2025-04-13 01:18:11 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:14.961954 | orchestrator | 2025-04-13 01:18:11 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:14.962196 | orchestrator | 2025-04-13 01:18:14 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:18.018338 | orchestrator | 2025-04-13 01:18:14 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:18.018479 | orchestrator | 2025-04-13 01:18:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:21.068670 | orchestrator | 2025-04-13 01:18:18 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:21.068846 | orchestrator | 2025-04-13 01:18:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:24.115827 | orchestrator | 2025-04-13 01:18:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:24.115971 | orchestrator | 2025-04-13 01:18:24 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:27.162564 | orchestrator | 2025-04-13 01:18:24 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:27.162713 | orchestrator | 2025-04-13 01:18:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:30.208323 | orchestrator | 2025-04-13 01:18:27 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:30.208507 | orchestrator | 2025-04-13 01:18:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:33.259496 | orchestrator | 2025-04-13 01:18:30 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:33.259654 | orchestrator | 2025-04-13 01:18:33 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:36.306380 | orchestrator | 2025-04-13 01:18:33 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:36.306522 | orchestrator | 2025-04-13 01:18:36 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:39.360006 | orchestrator | 2025-04-13 01:18:36 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:39.360186 | orchestrator | 2025-04-13 01:18:39 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:42.405611 | orchestrator | 2025-04-13 01:18:39 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:42.405731 | orchestrator | 2025-04-13 01:18:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:45.464649 | orchestrator | 2025-04-13 01:18:42 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:45.464794 | orchestrator | 2025-04-13 01:18:45 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:48.510363 | orchestrator | 2025-04-13 01:18:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:48.510504 | orchestrator | 2025-04-13 01:18:48 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:51.553795 | orchestrator | 2025-04-13 01:18:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:51.553922 | orchestrator | 2025-04-13 01:18:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:54.596568 | orchestrator | 2025-04-13 01:18:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:54.596709 | orchestrator | 2025-04-13 01:18:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:18:57.643501 | orchestrator | 2025-04-13 01:18:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:18:57.643669 | orchestrator | 2025-04-13 01:18:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:00.692300 | orchestrator | 2025-04-13 01:18:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:00.692476 | orchestrator | 2025-04-13 01:19:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:03.746929 | orchestrator | 2025-04-13 01:19:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:03.747032 | orchestrator | 2025-04-13 01:19:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:06.788994 | orchestrator | 2025-04-13 01:19:03 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:06.789162 | orchestrator | 2025-04-13 01:19:06 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:09.837587 | orchestrator | 2025-04-13 01:19:06 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:09.837733 | orchestrator | 2025-04-13 01:19:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:12.889399 | orchestrator | 2025-04-13 01:19:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:12.889543 | orchestrator | 2025-04-13 01:19:12 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:15.942326 | orchestrator | 2025-04-13 01:19:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:15.942461 | orchestrator | 2025-04-13 01:19:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:18.994382 | orchestrator | 2025-04-13 01:19:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:18.994507 | orchestrator | 2025-04-13 01:19:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:22.041050 | orchestrator | 2025-04-13 01:19:18 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:22.041235 | orchestrator | 2025-04-13 01:19:22 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:25.080239 | orchestrator | 2025-04-13 01:19:22 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:25.080377 | orchestrator | 2025-04-13 01:19:25 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:28.135175 | orchestrator | 2025-04-13 01:19:25 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:28.135329 | orchestrator | 2025-04-13 01:19:28 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:31.188413 | orchestrator | 2025-04-13 01:19:28 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:31.188548 | orchestrator | 2025-04-13 01:19:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:34.237718 | orchestrator | 2025-04-13 01:19:31 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:34.237877 | orchestrator | 2025-04-13 01:19:34 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:37.293527 | orchestrator | 2025-04-13 01:19:34 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:37.293679 | orchestrator | 2025-04-13 01:19:37 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:40.337321 | orchestrator | 2025-04-13 01:19:37 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:40.337467 | orchestrator | 2025-04-13 01:19:40 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:43.389927 | orchestrator | 2025-04-13 01:19:40 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:43.390172 | orchestrator | 2025-04-13 01:19:43 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:46.440507 | orchestrator | 2025-04-13 01:19:43 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:46.440654 | orchestrator | 2025-04-13 01:19:46 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:49.492454 | orchestrator | 2025-04-13 01:19:46 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:49.492586 | orchestrator | 2025-04-13 01:19:49 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:52.540020 | orchestrator | 2025-04-13 01:19:49 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:52.540190 | orchestrator | 2025-04-13 01:19:52 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:55.595573 | orchestrator | 2025-04-13 01:19:52 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:55.595711 | orchestrator | 2025-04-13 01:19:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:19:58.644448 | orchestrator | 2025-04-13 01:19:55 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:19:58.644601 | orchestrator | 2025-04-13 01:19:58 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:01.700926 | orchestrator | 2025-04-13 01:19:58 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:01.701098 | orchestrator | 2025-04-13 01:20:01 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:04.751318 | orchestrator | 2025-04-13 01:20:01 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:04.751463 | orchestrator | 2025-04-13 01:20:04 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:07.797907 | orchestrator | 2025-04-13 01:20:04 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:07.798075 | orchestrator | 2025-04-13 01:20:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:10.848860 | orchestrator | 2025-04-13 01:20:07 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:10.849020 | orchestrator | 2025-04-13 01:20:10 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:13.902568 | orchestrator | 2025-04-13 01:20:10 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:13.902716 | orchestrator | 2025-04-13 01:20:13 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:16.951825 | orchestrator | 2025-04-13 01:20:13 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:16.951975 | orchestrator | 2025-04-13 01:20:16 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:19.997920 | orchestrator | 2025-04-13 01:20:16 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:19.998164 | orchestrator | 2025-04-13 01:20:19 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:23.070526 | orchestrator | 2025-04-13 01:20:19 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:23.070681 | orchestrator | 2025-04-13 01:20:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:26.120843 | orchestrator | 2025-04-13 01:20:23 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:26.121019 | orchestrator | 2025-04-13 01:20:26 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:29.169889 | orchestrator | 2025-04-13 01:20:26 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:29.170097 | orchestrator | 2025-04-13 01:20:29 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:32.231608 | orchestrator | 2025-04-13 01:20:29 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:32.231745 | orchestrator | 2025-04-13 01:20:32 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:35.284976 | orchestrator | 2025-04-13 01:20:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:35.285128 | orchestrator | 2025-04-13 01:20:35 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:38.336406 | orchestrator | 2025-04-13 01:20:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:38.336549 | orchestrator | 2025-04-13 01:20:38 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:41.382111 | orchestrator | 2025-04-13 01:20:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:41.382317 | orchestrator | 2025-04-13 01:20:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:44.439776 | orchestrator | 2025-04-13 01:20:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:44.439918 | orchestrator | 2025-04-13 01:20:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:47.489942 | orchestrator | 2025-04-13 01:20:44 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:47.490144 | orchestrator | 2025-04-13 01:20:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:50.542113 | orchestrator | 2025-04-13 01:20:47 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:50.542298 | orchestrator | 2025-04-13 01:20:50 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:53.588210 | orchestrator | 2025-04-13 01:20:50 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:53.588338 | orchestrator | 2025-04-13 01:20:53 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:56.637070 | orchestrator | 2025-04-13 01:20:53 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:56.637262 | orchestrator | 2025-04-13 01:20:56 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:20:59.688473 | orchestrator | 2025-04-13 01:20:56 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:20:59.688592 | orchestrator | 2025-04-13 01:20:59 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:02.745673 | orchestrator | 2025-04-13 01:20:59 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:02.745844 | orchestrator | 2025-04-13 01:21:02 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:05.799068 | orchestrator | 2025-04-13 01:21:02 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:05.799236 | orchestrator | 2025-04-13 01:21:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:08.847597 | orchestrator | 2025-04-13 01:21:05 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:08.847735 | orchestrator | 2025-04-13 01:21:08 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:11.896566 | orchestrator | 2025-04-13 01:21:08 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:11.896714 | orchestrator | 2025-04-13 01:21:11 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:14.949150 | orchestrator | 2025-04-13 01:21:11 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:14.949329 | orchestrator | 2025-04-13 01:21:14 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:17.998241 | orchestrator | 2025-04-13 01:21:14 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:17.998731 | orchestrator | 2025-04-13 01:21:17 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:21.046473 | orchestrator | 2025-04-13 01:21:17 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:21.046611 | orchestrator | 2025-04-13 01:21:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:24.096426 | orchestrator | 2025-04-13 01:21:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:24.096566 | orchestrator | 2025-04-13 01:21:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:27.139462 | orchestrator | 2025-04-13 01:21:24 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:27.139601 | orchestrator | 2025-04-13 01:21:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:30.188584 | orchestrator | 2025-04-13 01:21:27 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:30.188760 | orchestrator | 2025-04-13 01:21:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:33.238595 | orchestrator | 2025-04-13 01:21:30 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:33.238738 | orchestrator | 2025-04-13 01:21:33 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:36.288522 | orchestrator | 2025-04-13 01:21:33 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:36.288676 | orchestrator | 2025-04-13 01:21:36 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:39.331987 | orchestrator | 2025-04-13 01:21:36 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:39.332127 | orchestrator | 2025-04-13 01:21:39 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:42.375283 | orchestrator | 2025-04-13 01:21:39 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:42.375441 | orchestrator | 2025-04-13 01:21:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:45.433276 | orchestrator | 2025-04-13 01:21:42 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:45.433419 | orchestrator | 2025-04-13 01:21:45 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:48.482686 | orchestrator | 2025-04-13 01:21:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:48.482839 | orchestrator | 2025-04-13 01:21:48 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:51.528208 | orchestrator | 2025-04-13 01:21:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:51.528342 | orchestrator | 2025-04-13 01:21:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:54.575114 | orchestrator | 2025-04-13 01:21:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:54.575288 | orchestrator | 2025-04-13 01:21:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:21:57.627030 | orchestrator | 2025-04-13 01:21:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:21:57.627161 | orchestrator | 2025-04-13 01:21:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:00.679788 | orchestrator | 2025-04-13 01:21:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:00.679896 | orchestrator | 2025-04-13 01:22:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:03.728637 | orchestrator | 2025-04-13 01:22:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:03.728778 | orchestrator | 2025-04-13 01:22:03 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:06.775888 | orchestrator | 2025-04-13 01:22:03 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:06.776042 | orchestrator | 2025-04-13 01:22:06 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:09.828141 | orchestrator | 2025-04-13 01:22:06 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:09.828309 | orchestrator | 2025-04-13 01:22:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:12.871824 | orchestrator | 2025-04-13 01:22:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:12.871977 | orchestrator | 2025-04-13 01:22:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:15.921659 | orchestrator | 2025-04-13 01:22:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:15.921770 | orchestrator | 2025-04-13 01:22:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:18.972916 | orchestrator | 2025-04-13 01:22:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:18.973068 | orchestrator | 2025-04-13 01:22:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:22.030518 | orchestrator | 2025-04-13 01:22:18 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:22.030763 | orchestrator | 2025-04-13 01:22:22 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:25.083440 | orchestrator | 2025-04-13 01:22:22 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:25.083588 | orchestrator | 2025-04-13 01:22:25 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:28.130987 | orchestrator | 2025-04-13 01:22:25 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:28.131133 | orchestrator | 2025-04-13 01:22:28 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:31.177769 | orchestrator | 2025-04-13 01:22:28 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:31.177919 | orchestrator | 2025-04-13 01:22:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:34.231439 | orchestrator | 2025-04-13 01:22:31 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:34.231578 | orchestrator | 2025-04-13 01:22:34 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:37.287558 | orchestrator | 2025-04-13 01:22:34 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:37.287711 | orchestrator | 2025-04-13 01:22:37 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:40.330657 | orchestrator | 2025-04-13 01:22:37 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:40.330804 | orchestrator | 2025-04-13 01:22:40 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:43.385948 | orchestrator | 2025-04-13 01:22:40 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:43.386210 | orchestrator | 2025-04-13 01:22:43 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:46.436351 | orchestrator | 2025-04-13 01:22:43 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:46.436487 | orchestrator | 2025-04-13 01:22:46 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:49.486531 | orchestrator | 2025-04-13 01:22:46 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:49.486676 | orchestrator | 2025-04-13 01:22:49 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:52.550213 | orchestrator | 2025-04-13 01:22:49 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:52.550358 | orchestrator | 2025-04-13 01:22:52 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:55.601861 | orchestrator | 2025-04-13 01:22:52 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:55.602000 | orchestrator | 2025-04-13 01:22:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:22:58.653277 | orchestrator | 2025-04-13 01:22:55 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:22:58.653375 | orchestrator | 2025-04-13 01:22:58 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:01.702860 | orchestrator | 2025-04-13 01:22:58 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:01.703014 | orchestrator | 2025-04-13 01:23:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:04.752846 | orchestrator | 2025-04-13 01:23:01 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:04.752994 | orchestrator | 2025-04-13 01:23:04 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:07.802349 | orchestrator | 2025-04-13 01:23:04 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:07.802501 | orchestrator | 2025-04-13 01:23:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:10.857287 | orchestrator | 2025-04-13 01:23:07 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:10.857385 | orchestrator | 2025-04-13 01:23:10 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:13.904361 | orchestrator | 2025-04-13 01:23:10 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:13.904514 | orchestrator | 2025-04-13 01:23:13 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:16.952491 | orchestrator | 2025-04-13 01:23:13 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:16.952628 | orchestrator | 2025-04-13 01:23:16 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:20.007721 | orchestrator | 2025-04-13 01:23:16 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:20.007867 | orchestrator | 2025-04-13 01:23:20 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:23.048360 | orchestrator | 2025-04-13 01:23:20 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:23.048500 | orchestrator | 2025-04-13 01:23:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:23.049387 | orchestrator | 2025-04-13 01:23:23 | INFO  | Task 7775a02b-d235-4c29-b051-5d63e6e5667c is in state STARTED 2025-04-13 01:23:26.094827 | orchestrator | 2025-04-13 01:23:23 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:26.095008 | orchestrator | 2025-04-13 01:23:26 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:26.095621 | orchestrator | 2025-04-13 01:23:26 | INFO  | Task 7775a02b-d235-4c29-b051-5d63e6e5667c is in state STARTED 2025-04-13 01:23:29.156583 | orchestrator | 2025-04-13 01:23:26 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:29.156729 | orchestrator | 2025-04-13 01:23:29 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:29.157985 | orchestrator | 2025-04-13 01:23:29 | INFO  | Task 7775a02b-d235-4c29-b051-5d63e6e5667c is in state STARTED 2025-04-13 01:23:32.215047 | orchestrator | 2025-04-13 01:23:29 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:32.215274 | orchestrator | 2025-04-13 01:23:32 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:32.216415 | orchestrator | 2025-04-13 01:23:32 | INFO  | Task 7775a02b-d235-4c29-b051-5d63e6e5667c is in state STARTED 2025-04-13 01:23:32.216550 | orchestrator | 2025-04-13 01:23:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:35.266406 | orchestrator | 
2025-04-13 01:23:35 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:35.267351 | orchestrator | 2025-04-13 01:23:35 | INFO  | Task 7775a02b-d235-4c29-b051-5d63e6e5667c is in state SUCCESS 2025-04-13 01:23:38.320346 | orchestrator | 2025-04-13 01:23:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:38.320448 | orchestrator | 2025-04-13 01:23:38 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:41.374569 | orchestrator | 2025-04-13 01:23:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:41.374712 | orchestrator | 2025-04-13 01:23:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:44.426948 | orchestrator | 2025-04-13 01:23:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:44.427053 | orchestrator | 2025-04-13 01:23:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:47.472186 | orchestrator | 2025-04-13 01:23:44 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:47.472358 | orchestrator | 2025-04-13 01:23:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:50.525164 | orchestrator | 2025-04-13 01:23:47 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:50.525464 | orchestrator | 2025-04-13 01:23:50 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:53.577399 | orchestrator | 2025-04-13 01:23:50 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:53.577541 | orchestrator | 2025-04-13 01:23:53 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:56.630478 | orchestrator | 2025-04-13 01:23:53 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:56.630630 | orchestrator | 2025-04-13 01:23:56 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:23:59.679758 | 
orchestrator | 2025-04-13 01:23:56 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:23:59.679903 | orchestrator | 2025-04-13 01:23:59 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:02.728527 | orchestrator | 2025-04-13 01:23:59 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:02.728676 | orchestrator | 2025-04-13 01:24:02 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:05.777283 | orchestrator | 2025-04-13 01:24:02 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:05.777418 | orchestrator | 2025-04-13 01:24:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:08.827728 | orchestrator | 2025-04-13 01:24:05 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:08.827838 | orchestrator | 2025-04-13 01:24:08 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:11.876517 | orchestrator | 2025-04-13 01:24:08 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:11.876665 | orchestrator | 2025-04-13 01:24:11 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:14.925316 | orchestrator | 2025-04-13 01:24:11 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:14.925447 | orchestrator | 2025-04-13 01:24:14 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:17.976594 | orchestrator | 2025-04-13 01:24:14 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:17.976741 | orchestrator | 2025-04-13 01:24:17 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:21.023480 | orchestrator | 2025-04-13 01:24:17 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:21.023629 | orchestrator | 2025-04-13 01:24:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:24.061792 | orchestrator | 
2025-04-13 01:24:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:24.061904 | orchestrator | 2025-04-13 01:24:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:27.108394 | orchestrator | 2025-04-13 01:24:24 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:27.108546 | orchestrator | 2025-04-13 01:24:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:30.163833 | orchestrator | 2025-04-13 01:24:27 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:30.163977 | orchestrator | 2025-04-13 01:24:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:33.217496 | orchestrator | 2025-04-13 01:24:30 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:33.217663 | orchestrator | 2025-04-13 01:24:33 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:36.272286 | orchestrator | 2025-04-13 01:24:33 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:36.272428 | orchestrator | 2025-04-13 01:24:36 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:39.322593 | orchestrator | 2025-04-13 01:24:36 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:39.322749 | orchestrator | 2025-04-13 01:24:39 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:42.370732 | orchestrator | 2025-04-13 01:24:39 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:42.370884 | orchestrator | 2025-04-13 01:24:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:45.438786 | orchestrator | 2025-04-13 01:24:42 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:45.438936 | orchestrator | 2025-04-13 01:24:45 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:48.486427 | orchestrator | 2025-04-13 
01:24:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:48.486559 | orchestrator | 2025-04-13 01:24:48 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:51.537610 | orchestrator | 2025-04-13 01:24:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:51.537759 | orchestrator | 2025-04-13 01:24:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:54.583542 | orchestrator | 2025-04-13 01:24:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:54.583722 | orchestrator | 2025-04-13 01:24:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:24:57.639128 | orchestrator | 2025-04-13 01:24:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:24:57.639334 | orchestrator | 2025-04-13 01:24:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:00.693423 | orchestrator | 2025-04-13 01:24:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:00.693604 | orchestrator | 2025-04-13 01:25:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:03.741885 | orchestrator | 2025-04-13 01:25:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:03.742061 | orchestrator | 2025-04-13 01:25:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:06.790179 | orchestrator | 2025-04-13 01:25:03 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:06.790380 | orchestrator | 2025-04-13 01:25:06 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:09.843278 | orchestrator | 2025-04-13 01:25:06 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:09.843439 | orchestrator | 2025-04-13 01:25:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:12.889037 | orchestrator | 2025-04-13 01:25:09 | INFO 
 | Wait 1 second(s) until the next check 2025-04-13 01:25:12.889180 | orchestrator | 2025-04-13 01:25:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:15.948865 | orchestrator | 2025-04-13 01:25:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:15.948999 | orchestrator | 2025-04-13 01:25:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:18.997299 | orchestrator | 2025-04-13 01:25:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:18.997446 | orchestrator | 2025-04-13 01:25:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:22.049912 | orchestrator | 2025-04-13 01:25:18 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:22.050111 | orchestrator | 2025-04-13 01:25:22 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:25.097887 | orchestrator | 2025-04-13 01:25:22 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:25.098088 | orchestrator | 2025-04-13 01:25:25 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:28.149451 | orchestrator | 2025-04-13 01:25:25 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:28.149621 | orchestrator | 2025-04-13 01:25:28 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:31.209004 | orchestrator | 2025-04-13 01:25:28 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:31.209140 | orchestrator | 2025-04-13 01:25:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:34.259528 | orchestrator | 2025-04-13 01:25:31 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:34.259686 | orchestrator | 2025-04-13 01:25:34 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:37.313592 | orchestrator | 2025-04-13 01:25:34 | INFO  | Wait 1 
second(s) until the next check 2025-04-13 01:25:37.313734 | orchestrator | 2025-04-13 01:25:37 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:40.356303 | orchestrator | 2025-04-13 01:25:37 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:40.356444 | orchestrator | 2025-04-13 01:25:40 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:43.396346 | orchestrator | 2025-04-13 01:25:40 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:43.396486 | orchestrator | 2025-04-13 01:25:43 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:46.444490 | orchestrator | 2025-04-13 01:25:43 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:46.444641 | orchestrator | 2025-04-13 01:25:46 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:49.489399 | orchestrator | 2025-04-13 01:25:46 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:49.489539 | orchestrator | 2025-04-13 01:25:49 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:52.540926 | orchestrator | 2025-04-13 01:25:49 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:52.541103 | orchestrator | 2025-04-13 01:25:52 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:55.592949 | orchestrator | 2025-04-13 01:25:52 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:55.593090 | orchestrator | 2025-04-13 01:25:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:25:58.642993 | orchestrator | 2025-04-13 01:25:55 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:25:58.643177 | orchestrator | 2025-04-13 01:25:58 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:01.695012 | orchestrator | 2025-04-13 01:25:58 | INFO  | Wait 1 second(s) until 
the next check 2025-04-13 01:26:01.695150 | orchestrator | 2025-04-13 01:26:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:04.734786 | orchestrator | 2025-04-13 01:26:01 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:04.734982 | orchestrator | 2025-04-13 01:26:04 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:07.787472 | orchestrator | 2025-04-13 01:26:04 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:07.787566 | orchestrator | 2025-04-13 01:26:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:10.836286 | orchestrator | 2025-04-13 01:26:07 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:10.836442 | orchestrator | 2025-04-13 01:26:10 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:13.892959 | orchestrator | 2025-04-13 01:26:10 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:13.893101 | orchestrator | 2025-04-13 01:26:13 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:16.952557 | orchestrator | 2025-04-13 01:26:13 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:16.952696 | orchestrator | 2025-04-13 01:26:16 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:20.000573 | orchestrator | 2025-04-13 01:26:16 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:20.000711 | orchestrator | 2025-04-13 01:26:19 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:23.055864 | orchestrator | 2025-04-13 01:26:19 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:23.056014 | orchestrator | 2025-04-13 01:26:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:26.102222 | orchestrator | 2025-04-13 01:26:23 | INFO  | Wait 1 second(s) until the next check 
2025-04-13 01:26:26.102401 | orchestrator | 2025-04-13 01:26:26 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:29.146760 | orchestrator | 2025-04-13 01:26:26 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:29.146909 | orchestrator | 2025-04-13 01:26:29 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:32.191447 | orchestrator | 2025-04-13 01:26:29 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:32.191572 | orchestrator | 2025-04-13 01:26:32 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:35.240356 | orchestrator | 2025-04-13 01:26:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:35.240506 | orchestrator | 2025-04-13 01:26:35 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:38.289655 | orchestrator | 2025-04-13 01:26:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:38.289779 | orchestrator | 2025-04-13 01:26:38 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:41.332211 | orchestrator | 2025-04-13 01:26:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:41.332382 | orchestrator | 2025-04-13 01:26:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:44.381400 | orchestrator | 2025-04-13 01:26:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:44.381546 | orchestrator | 2025-04-13 01:26:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:47.434860 | orchestrator | 2025-04-13 01:26:44 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:47.435010 | orchestrator | 2025-04-13 01:26:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:50.495159 | orchestrator | 2025-04-13 01:26:47 | INFO  | Wait 1 second(s) until the next check 2025-04-13 
01:26:50.495366 | orchestrator | 2025-04-13 01:26:50 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:53.554449 | orchestrator | 2025-04-13 01:26:50 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:53.554604 | orchestrator | 2025-04-13 01:26:53 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:56.603029 | orchestrator | 2025-04-13 01:26:53 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:56.603166 | orchestrator | 2025-04-13 01:26:56 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:26:59.656387 | orchestrator | 2025-04-13 01:26:56 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:26:59.656528 | orchestrator | 2025-04-13 01:26:59 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:02.714359 | orchestrator | 2025-04-13 01:26:59 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:02.714528 | orchestrator | 2025-04-13 01:27:02 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:05.764909 | orchestrator | 2025-04-13 01:27:02 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:05.765059 | orchestrator | 2025-04-13 01:27:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:08.819898 | orchestrator | 2025-04-13 01:27:05 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:08.820038 | orchestrator | 2025-04-13 01:27:08 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:11.869693 | orchestrator | 2025-04-13 01:27:08 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:11.869916 | orchestrator | 2025-04-13 01:27:11 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:14.916924 | orchestrator | 2025-04-13 01:27:11 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:14.917018 
| orchestrator | 2025-04-13 01:27:14 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:17.967736 | orchestrator | 2025-04-13 01:27:14 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:17.967884 | orchestrator | 2025-04-13 01:27:17 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:21.018377 | orchestrator | 2025-04-13 01:27:17 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:21.018522 | orchestrator | 2025-04-13 01:27:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:24.058766 | orchestrator | 2025-04-13 01:27:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:24.058885 | orchestrator | 2025-04-13 01:27:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:27.114666 | orchestrator | 2025-04-13 01:27:24 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:27.114804 | orchestrator | 2025-04-13 01:27:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:30.169669 | orchestrator | 2025-04-13 01:27:27 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:30.169820 | orchestrator | 2025-04-13 01:27:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:33.216022 | orchestrator | 2025-04-13 01:27:30 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:33.216158 | orchestrator | 2025-04-13 01:27:33 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:36.270765 | orchestrator | 2025-04-13 01:27:33 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:36.270926 | orchestrator | 2025-04-13 01:27:36 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:39.321554 | orchestrator | 2025-04-13 01:27:36 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:39.321695 | orchestrator 
| 2025-04-13 01:27:39 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:42.366662 | orchestrator | 2025-04-13 01:27:39 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:42.366816 | orchestrator | 2025-04-13 01:27:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:45.421997 | orchestrator | 2025-04-13 01:27:42 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:45.422227 | orchestrator | 2025-04-13 01:27:45 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:48.474346 | orchestrator | 2025-04-13 01:27:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:48.474502 | orchestrator | 2025-04-13 01:27:48 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:51.522770 | orchestrator | 2025-04-13 01:27:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:51.522916 | orchestrator | 2025-04-13 01:27:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:54.571212 | orchestrator | 2025-04-13 01:27:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:54.571433 | orchestrator | 2025-04-13 01:27:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:27:57.622968 | orchestrator | 2025-04-13 01:27:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:27:57.623114 | orchestrator | 2025-04-13 01:27:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:28:00.670576 | orchestrator | 2025-04-13 01:27:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:28:00.670750 | orchestrator | 2025-04-13 01:28:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:28:03.715402 | orchestrator | 2025-04-13 01:28:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:28:03.715547 | orchestrator | 2025-04-13 
01:28:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
[... repetitive polling output condensed: task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 was re-checked every ~3 seconds and remained in state STARTED from 2025-04-13 01:28:03 through 01:36:02, each check followed by "Wait 1 second(s) until the next check" ...]
2025-04-13 01:33:24.015103 | orchestrator | 2025-04-13 01:33:24 | INFO  | Task 6f066514-5a0f-4a5a-b695-6e5807ab694c is in state STARTED
[... both tasks polled in parallel; 6f066514-5a0f-4a5a-b695-6e5807ab694c reported STARTED on each check until 01:33:36 ...]
2025-04-13 01:33:36.269914 | orchestrator | 2025-04-13 01:33:36 | INFO  | Task 6f066514-5a0f-4a5a-b695-6e5807ab694c is in state SUCCESS
2025-04-13 01:36:05.707279 | orchestrator | 2025-04-13 01:36:02 | INFO  | Wait 1 second(s) until
the next check 2025-04-13 01:36:05.707469 | orchestrator | 2025-04-13 01:36:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:08.758260 | orchestrator | 2025-04-13 01:36:05 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:08.758530 | orchestrator | 2025-04-13 01:36:08 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:11.802985 | orchestrator | 2025-04-13 01:36:08 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:11.803082 | orchestrator | 2025-04-13 01:36:11 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:14.851571 | orchestrator | 2025-04-13 01:36:11 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:14.851719 | orchestrator | 2025-04-13 01:36:14 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:17.894237 | orchestrator | 2025-04-13 01:36:14 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:17.894431 | orchestrator | 2025-04-13 01:36:17 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:20.942736 | orchestrator | 2025-04-13 01:36:17 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:20.942863 | orchestrator | 2025-04-13 01:36:20 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:23.997019 | orchestrator | 2025-04-13 01:36:20 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:23.997158 | orchestrator | 2025-04-13 01:36:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:23.997439 | orchestrator | 2025-04-13 01:36:23 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:27.046880 | orchestrator | 2025-04-13 01:36:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:30.097343 | orchestrator | 2025-04-13 01:36:27 | INFO  | Wait 1 second(s) until the next check 
2025-04-13 01:36:30.097615 | orchestrator | 2025-04-13 01:36:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:33.144077 | orchestrator | 2025-04-13 01:36:30 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:33.144276 | orchestrator | 2025-04-13 01:36:33 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:36.195486 | orchestrator | 2025-04-13 01:36:33 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:36.195624 | orchestrator | 2025-04-13 01:36:36 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:39.234645 | orchestrator | 2025-04-13 01:36:36 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:39.234799 | orchestrator | 2025-04-13 01:36:39 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:42.280155 | orchestrator | 2025-04-13 01:36:39 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:42.280267 | orchestrator | 2025-04-13 01:36:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:45.331962 | orchestrator | 2025-04-13 01:36:42 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:45.332103 | orchestrator | 2025-04-13 01:36:45 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:48.385112 | orchestrator | 2025-04-13 01:36:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:48.385256 | orchestrator | 2025-04-13 01:36:48 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:51.432154 | orchestrator | 2025-04-13 01:36:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:51.432332 | orchestrator | 2025-04-13 01:36:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:54.480232 | orchestrator | 2025-04-13 01:36:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 
01:36:54.480332 | orchestrator | 2025-04-13 01:36:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:36:57.530951 | orchestrator | 2025-04-13 01:36:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:36:57.531106 | orchestrator | 2025-04-13 01:36:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:00.585637 | orchestrator | 2025-04-13 01:36:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:00.585775 | orchestrator | 2025-04-13 01:37:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:03.633053 | orchestrator | 2025-04-13 01:37:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:03.633168 | orchestrator | 2025-04-13 01:37:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:06.691213 | orchestrator | 2025-04-13 01:37:03 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:06.691441 | orchestrator | 2025-04-13 01:37:06 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:09.743803 | orchestrator | 2025-04-13 01:37:06 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:09.743957 | orchestrator | 2025-04-13 01:37:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:12.798457 | orchestrator | 2025-04-13 01:37:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:12.798577 | orchestrator | 2025-04-13 01:37:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:15.850176 | orchestrator | 2025-04-13 01:37:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:15.850323 | orchestrator | 2025-04-13 01:37:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:18.906750 | orchestrator | 2025-04-13 01:37:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:18.906915 
| orchestrator | 2025-04-13 01:37:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:21.946802 | orchestrator | 2025-04-13 01:37:18 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:21.946952 | orchestrator | 2025-04-13 01:37:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:25.001237 | orchestrator | 2025-04-13 01:37:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:25.001335 | orchestrator | 2025-04-13 01:37:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:28.069474 | orchestrator | 2025-04-13 01:37:24 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:28.069622 | orchestrator | 2025-04-13 01:37:28 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:31.121172 | orchestrator | 2025-04-13 01:37:28 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:31.121330 | orchestrator | 2025-04-13 01:37:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:34.170693 | orchestrator | 2025-04-13 01:37:31 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:34.170842 | orchestrator | 2025-04-13 01:37:34 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:37.231321 | orchestrator | 2025-04-13 01:37:34 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:37.231525 | orchestrator | 2025-04-13 01:37:37 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:40.283832 | orchestrator | 2025-04-13 01:37:37 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:40.283968 | orchestrator | 2025-04-13 01:37:40 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:43.330202 | orchestrator | 2025-04-13 01:37:40 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:43.330339 | orchestrator 
| 2025-04-13 01:37:43 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:46.393434 | orchestrator | 2025-04-13 01:37:43 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:46.393595 | orchestrator | 2025-04-13 01:37:46 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:49.437250 | orchestrator | 2025-04-13 01:37:46 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:49.437481 | orchestrator | 2025-04-13 01:37:49 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:52.478620 | orchestrator | 2025-04-13 01:37:49 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:52.478721 | orchestrator | 2025-04-13 01:37:52 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:55.530593 | orchestrator | 2025-04-13 01:37:52 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:55.530723 | orchestrator | 2025-04-13 01:37:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:37:58.587150 | orchestrator | 2025-04-13 01:37:55 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:37:58.587307 | orchestrator | 2025-04-13 01:37:58 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:01.644420 | orchestrator | 2025-04-13 01:37:58 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:01.644561 | orchestrator | 2025-04-13 01:38:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:04.698345 | orchestrator | 2025-04-13 01:38:01 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:04.698529 | orchestrator | 2025-04-13 01:38:04 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:07.743963 | orchestrator | 2025-04-13 01:38:04 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:07.744056 | orchestrator | 2025-04-13 
01:38:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:10.802682 | orchestrator | 2025-04-13 01:38:07 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:10.802829 | orchestrator | 2025-04-13 01:38:10 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:13.855635 | orchestrator | 2025-04-13 01:38:10 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:13.855747 | orchestrator | 2025-04-13 01:38:13 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:16.918707 | orchestrator | 2025-04-13 01:38:13 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:16.918890 | orchestrator | 2025-04-13 01:38:16 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:19.966269 | orchestrator | 2025-04-13 01:38:16 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:19.966474 | orchestrator | 2025-04-13 01:38:19 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:23.034212 | orchestrator | 2025-04-13 01:38:19 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:23.034481 | orchestrator | 2025-04-13 01:38:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:26.085827 | orchestrator | 2025-04-13 01:38:23 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:26.085970 | orchestrator | 2025-04-13 01:38:26 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:29.140844 | orchestrator | 2025-04-13 01:38:26 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:29.140995 | orchestrator | 2025-04-13 01:38:29 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:32.195192 | orchestrator | 2025-04-13 01:38:29 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:32.195328 | orchestrator | 2025-04-13 01:38:32 | INFO 
 | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:35.237048 | orchestrator | 2025-04-13 01:38:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:35.237231 | orchestrator | 2025-04-13 01:38:35 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:38.286629 | orchestrator | 2025-04-13 01:38:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:38.286726 | orchestrator | 2025-04-13 01:38:38 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:41.332530 | orchestrator | 2025-04-13 01:38:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:41.332683 | orchestrator | 2025-04-13 01:38:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:44.393585 | orchestrator | 2025-04-13 01:38:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:44.393687 | orchestrator | 2025-04-13 01:38:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:47.441151 | orchestrator | 2025-04-13 01:38:44 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:47.441302 | orchestrator | 2025-04-13 01:38:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:50.496132 | orchestrator | 2025-04-13 01:38:47 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:50.496226 | orchestrator | 2025-04-13 01:38:50 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:53.553551 | orchestrator | 2025-04-13 01:38:50 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:53.553702 | orchestrator | 2025-04-13 01:38:53 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:56.608388 | orchestrator | 2025-04-13 01:38:53 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:56.608595 | orchestrator | 2025-04-13 01:38:56 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:38:59.660986 | orchestrator | 2025-04-13 01:38:56 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:38:59.661126 | orchestrator | 2025-04-13 01:38:59 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:02.713362 | orchestrator | 2025-04-13 01:38:59 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:02.713579 | orchestrator | 2025-04-13 01:39:02 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:05.765969 | orchestrator | 2025-04-13 01:39:02 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:05.766207 | orchestrator | 2025-04-13 01:39:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:08.813829 | orchestrator | 2025-04-13 01:39:05 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:08.814001 | orchestrator | 2025-04-13 01:39:08 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:11.864927 | orchestrator | 2025-04-13 01:39:08 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:11.865076 | orchestrator | 2025-04-13 01:39:11 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:14.911734 | orchestrator | 2025-04-13 01:39:11 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:14.911871 | orchestrator | 2025-04-13 01:39:14 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:17.957932 | orchestrator | 2025-04-13 01:39:14 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:17.958135 | orchestrator | 2025-04-13 01:39:17 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:21.014980 | orchestrator | 2025-04-13 01:39:17 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:21.015121 | orchestrator | 2025-04-13 01:39:21 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:24.067886 | orchestrator | 2025-04-13 01:39:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:24.068060 | orchestrator | 2025-04-13 01:39:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:27.112576 | orchestrator | 2025-04-13 01:39:24 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:27.112716 | orchestrator | 2025-04-13 01:39:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:30.173683 | orchestrator | 2025-04-13 01:39:27 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:30.173837 | orchestrator | 2025-04-13 01:39:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:33.217506 | orchestrator | 2025-04-13 01:39:30 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:33.217651 | orchestrator | 2025-04-13 01:39:33 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:36.273245 | orchestrator | 2025-04-13 01:39:33 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:36.273357 | orchestrator | 2025-04-13 01:39:36 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:39.325361 | orchestrator | 2025-04-13 01:39:36 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:39.325550 | orchestrator | 2025-04-13 01:39:39 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:42.373592 | orchestrator | 2025-04-13 01:39:39 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:42.373740 | orchestrator | 2025-04-13 01:39:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:45.418161 | orchestrator | 2025-04-13 01:39:42 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:45.418300 | orchestrator | 2025-04-13 01:39:45 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:48.467030 | orchestrator | 2025-04-13 01:39:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:48.467187 | orchestrator | 2025-04-13 01:39:48 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:51.512779 | orchestrator | 2025-04-13 01:39:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:51.512952 | orchestrator | 2025-04-13 01:39:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:54.561498 | orchestrator | 2025-04-13 01:39:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:54.561680 | orchestrator | 2025-04-13 01:39:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:39:57.613564 | orchestrator | 2025-04-13 01:39:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:39:57.613703 | orchestrator | 2025-04-13 01:39:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:00.656055 | orchestrator | 2025-04-13 01:39:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:00.656222 | orchestrator | 2025-04-13 01:40:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:03.705628 | orchestrator | 2025-04-13 01:40:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:03.705766 | orchestrator | 2025-04-13 01:40:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:06.752186 | orchestrator | 2025-04-13 01:40:03 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:06.752339 | orchestrator | 2025-04-13 01:40:06 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:09.803158 | orchestrator | 2025-04-13 01:40:06 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:09.803311 | orchestrator | 2025-04-13 01:40:09 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:12.849255 | orchestrator | 2025-04-13 01:40:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:12.849475 | orchestrator | 2025-04-13 01:40:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:15.893454 | orchestrator | 2025-04-13 01:40:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:15.893574 | orchestrator | 2025-04-13 01:40:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:18.936324 | orchestrator | 2025-04-13 01:40:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:18.936494 | orchestrator | 2025-04-13 01:40:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:21.987299 | orchestrator | 2025-04-13 01:40:18 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:21.987490 | orchestrator | 2025-04-13 01:40:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:25.037326 | orchestrator | 2025-04-13 01:40:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:25.037529 | orchestrator | 2025-04-13 01:40:25 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:28.084873 | orchestrator | 2025-04-13 01:40:25 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:28.085014 | orchestrator | 2025-04-13 01:40:28 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:31.136071 | orchestrator | 2025-04-13 01:40:28 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:31.136222 | orchestrator | 2025-04-13 01:40:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:34.177095 | orchestrator | 2025-04-13 01:40:31 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:34.177204 | orchestrator | 2025-04-13 01:40:34 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:37.227956 | orchestrator | 2025-04-13 01:40:34 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:37.228081 | orchestrator | 2025-04-13 01:40:37 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:40.271979 | orchestrator | 2025-04-13 01:40:37 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:40.272104 | orchestrator | 2025-04-13 01:40:40 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:43.332328 | orchestrator | 2025-04-13 01:40:40 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:43.332585 | orchestrator | 2025-04-13 01:40:43 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:46.371538 | orchestrator | 2025-04-13 01:40:43 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:46.371683 | orchestrator | 2025-04-13 01:40:46 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:49.410939 | orchestrator | 2025-04-13 01:40:46 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:49.411094 | orchestrator | 2025-04-13 01:40:49 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:52.458175 | orchestrator | 2025-04-13 01:40:49 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:52.458310 | orchestrator | 2025-04-13 01:40:52 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:55.499923 | orchestrator | 2025-04-13 01:40:52 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:55.500077 | orchestrator | 2025-04-13 01:40:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:40:58.550878 | orchestrator | 2025-04-13 01:40:55 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:40:58.551050 | orchestrator | 2025-04-13 01:40:58 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:41:01.614588 | orchestrator | 2025-04-13 01:40:58 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:41:01.614744 | orchestrator | 2025-04-13 01:41:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:41:04.663494 | orchestrator | 2025-04-13 01:41:01 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:41:04.663641 | orchestrator | 2025-04-13 01:41:04 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:41:07.713103 | orchestrator | 2025-04-13 01:41:04 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:41:07.713251 | orchestrator | 2025-04-13 01:41:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:41:10.764191 | orchestrator | 2025-04-13 01:41:07 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:41:10.764336 | orchestrator | 2025-04-13 01:41:10 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:41:13.806272 | orchestrator | 2025-04-13 01:41:10 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:41:13.806410 | orchestrator | 2025-04-13 01:41:13 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:41:16.863057 | orchestrator | 2025-04-13 01:41:13 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:41:16.863199 | orchestrator | 2025-04-13 01:41:16 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:41:19.921871 | orchestrator | 2025-04-13 01:41:16 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:41:19.922079 | orchestrator | 2025-04-13 01:41:19 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:41:22.971223 | orchestrator | 2025-04-13 01:41:19 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:41:22.971392 | orchestrator | 2025-04-13 01:41:22 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED
[... "Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED" / "Wait 1 second(s) until the next check" entries repeated every ~3 s through 2025-04-13 01:43:25 ...]
2025-04-13 01:43:25.066930 | orchestrator | 2025-04-13 01:43:25 | INFO  | Task ef0b3244-9efe-4534-b1fe-a3a50527f0dc is in state STARTED
2025-04-13 01:43:34.232897 | orchestrator | 2025-04-13 01:43:34 | INFO  | Task ef0b3244-9efe-4534-b1fe-a3a50527f0dc is in state SUCCESS
[... "Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED" / "Wait 1 second(s) until the next check" entries repeated every ~3 s from 2025-04-13 01:43:34 to 01:49:49 ...]
2025-04-13 01:49:49.355695 | orchestrator | 2025-04-13 01:49:49 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:49:52.405142 | orchestrator | 2025-04-13 01:49:49 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:49:52.405284 | orchestrator | 2025-04-13 01:49:52 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:49:55.446842 | orchestrator | 2025-04-13 01:49:52 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:49:55.447000 | orchestrator | 2025-04-13 01:49:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:49:58.494204 | orchestrator | 2025-04-13 01:49:55 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:49:58.494408 | orchestrator | 2025-04-13 01:49:58 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:01.538795 | orchestrator | 2025-04-13 01:49:58 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:01.538949 | orchestrator | 2025-04-13 01:50:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:04.586314 | orchestrator | 2025-04-13 01:50:01 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:04.586528 | orchestrator | 2025-04-13 01:50:04 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:07.637603 | orchestrator | 2025-04-13 01:50:04 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:07.637746 | orchestrator | 2025-04-13 01:50:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:10.681661 | orchestrator | 2025-04-13 01:50:07 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:10.681838 | orchestrator | 2025-04-13 01:50:10 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:13.732382 | orchestrator | 2025-04-13 01:50:10 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:13.732556 | orchestrator | 2025-04-13 01:50:13 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:16.781041 | orchestrator | 2025-04-13 01:50:13 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:16.781192 | orchestrator | 2025-04-13 01:50:16 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:19.835658 | orchestrator | 2025-04-13 01:50:16 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:19.835802 | orchestrator | 2025-04-13 01:50:19 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:22.881584 | orchestrator | 2025-04-13 01:50:19 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:22.881734 | orchestrator | 2025-04-13 01:50:22 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:25.934538 | orchestrator | 2025-04-13 01:50:22 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:25.934684 | orchestrator | 2025-04-13 01:50:25 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:28.978519 | orchestrator | 2025-04-13 01:50:25 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:28.978668 | orchestrator | 2025-04-13 01:50:28 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:32.031172 | orchestrator | 2025-04-13 01:50:28 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:32.031317 | orchestrator | 2025-04-13 01:50:32 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:35.081336 | orchestrator | 2025-04-13 01:50:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:35.081525 | orchestrator | 2025-04-13 01:50:35 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:38.127763 | orchestrator | 2025-04-13 01:50:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:38.127901 | orchestrator | 2025-04-13 01:50:38 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:41.187536 | orchestrator | 2025-04-13 01:50:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:41.187696 | orchestrator | 2025-04-13 01:50:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:44.234765 | orchestrator | 2025-04-13 01:50:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:44.234909 | orchestrator | 2025-04-13 01:50:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:44.235671 | orchestrator | 2025-04-13 01:50:44 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:47.292974 | orchestrator | 2025-04-13 01:50:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:50.341733 | orchestrator | 2025-04-13 01:50:47 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:50.341868 | orchestrator | 2025-04-13 01:50:50 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:53.392439 | orchestrator | 2025-04-13 01:50:50 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:53.392589 | orchestrator | 2025-04-13 01:50:53 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:56.444707 | orchestrator | 2025-04-13 01:50:53 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:56.444846 | orchestrator | 2025-04-13 01:50:56 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:50:56.445894 | orchestrator | 2025-04-13 01:50:56 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:50:59.495568 | orchestrator | 2025-04-13 01:50:59 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:02.538800 | orchestrator | 2025-04-13 01:50:59 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:02.538946 | orchestrator | 2025-04-13 01:51:02 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:05.592229 | orchestrator | 2025-04-13 01:51:02 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:05.592453 | orchestrator | 2025-04-13 01:51:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:08.641995 | orchestrator | 2025-04-13 01:51:05 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:08.642209 | orchestrator | 2025-04-13 01:51:08 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:11.699145 | orchestrator | 2025-04-13 01:51:08 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:11.699397 | orchestrator | 2025-04-13 01:51:11 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:14.751557 | orchestrator | 2025-04-13 01:51:11 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:14.751718 | orchestrator | 2025-04-13 01:51:14 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:17.804572 | orchestrator | 2025-04-13 01:51:14 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:17.804722 | orchestrator | 2025-04-13 01:51:17 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:20.847962 | orchestrator | 2025-04-13 01:51:17 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:20.848100 | orchestrator | 2025-04-13 01:51:20 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:23.898926 | orchestrator | 2025-04-13 01:51:20 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:23.899089 | orchestrator | 2025-04-13 01:51:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:26.959983 | orchestrator | 2025-04-13 01:51:23 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:26.960129 | orchestrator | 2025-04-13 01:51:26 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:30.015164 | orchestrator | 2025-04-13 01:51:26 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:30.015317 | orchestrator | 2025-04-13 01:51:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:33.062807 | orchestrator | 2025-04-13 01:51:30 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:33.062956 | orchestrator | 2025-04-13 01:51:33 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:36.109586 | orchestrator | 2025-04-13 01:51:33 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:36.109733 | orchestrator | 2025-04-13 01:51:36 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:39.162701 | orchestrator | 2025-04-13 01:51:36 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:39.162847 | orchestrator | 2025-04-13 01:51:39 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:42.220931 | orchestrator | 2025-04-13 01:51:39 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:42.221079 | orchestrator | 2025-04-13 01:51:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:45.280399 | orchestrator | 2025-04-13 01:51:42 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:45.280579 | orchestrator | 2025-04-13 01:51:45 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:48.333820 | orchestrator | 2025-04-13 01:51:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:48.333970 | orchestrator | 2025-04-13 01:51:48 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:51.380483 | orchestrator | 2025-04-13 01:51:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:51.380623 | orchestrator | 2025-04-13 01:51:51 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:54.436641 | orchestrator | 2025-04-13 01:51:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:54.436789 | orchestrator | 2025-04-13 01:51:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:51:57.486544 | orchestrator | 2025-04-13 01:51:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:51:57.486723 | orchestrator | 2025-04-13 01:51:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:00.540452 | orchestrator | 2025-04-13 01:51:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:00.540604 | orchestrator | 2025-04-13 01:52:00 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:03.594834 | orchestrator | 2025-04-13 01:52:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:03.594989 | orchestrator | 2025-04-13 01:52:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:06.644193 | orchestrator | 2025-04-13 01:52:03 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:06.644446 | orchestrator | 2025-04-13 01:52:06 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:09.685251 | orchestrator | 2025-04-13 01:52:06 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:09.685389 | orchestrator | 2025-04-13 01:52:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:12.730081 | orchestrator | 2025-04-13 01:52:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:12.730264 | orchestrator | 2025-04-13 01:52:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:15.778265 | orchestrator | 2025-04-13 01:52:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:15.778454 | orchestrator | 2025-04-13 01:52:15 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:15.779514 | orchestrator | 2025-04-13 01:52:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:18.831125 | orchestrator | 2025-04-13 01:52:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:21.884850 | orchestrator | 2025-04-13 01:52:18 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:21.885001 | orchestrator | 2025-04-13 01:52:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:24.934848 | orchestrator | 2025-04-13 01:52:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:24.934960 | orchestrator | 2025-04-13 01:52:24 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:27.989611 | orchestrator | 2025-04-13 01:52:24 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:27.989762 | orchestrator | 2025-04-13 01:52:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:31.039612 | orchestrator | 2025-04-13 01:52:27 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:31.039762 | orchestrator | 2025-04-13 01:52:31 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:34.093909 | orchestrator | 2025-04-13 01:52:31 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:34.094115 | orchestrator | 2025-04-13 01:52:34 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:37.140697 | orchestrator | 2025-04-13 01:52:34 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:37.140840 | orchestrator | 2025-04-13 01:52:37 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:40.189670 | orchestrator | 2025-04-13 01:52:37 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:40.189822 | orchestrator | 2025-04-13 01:52:40 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:43.240886 | orchestrator | 2025-04-13 01:52:40 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:43.241063 | orchestrator | 2025-04-13 01:52:43 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:46.290613 | orchestrator | 2025-04-13 01:52:43 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:46.290738 | orchestrator | 2025-04-13 01:52:46 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:49.336976 | orchestrator | 2025-04-13 01:52:46 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:49.337109 | orchestrator | 2025-04-13 01:52:49 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:52.391570 | orchestrator | 2025-04-13 01:52:49 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:52.391723 | orchestrator | 2025-04-13 01:52:52 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:55.441803 | orchestrator | 2025-04-13 01:52:52 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:55.441951 | orchestrator | 2025-04-13 01:52:55 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:52:58.505077 | orchestrator | 2025-04-13 01:52:55 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:52:58.505215 | orchestrator | 2025-04-13 01:52:58 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:01.557472 | orchestrator | 2025-04-13 01:52:58 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:01.557635 | orchestrator | 2025-04-13 01:53:01 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:04.611070 | orchestrator | 2025-04-13 01:53:01 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:04.611215 | orchestrator | 2025-04-13 01:53:04 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:07.668706 | orchestrator | 2025-04-13 01:53:04 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:07.668856 | orchestrator | 2025-04-13 01:53:07 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:10.722849 | orchestrator | 2025-04-13 01:53:07 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:10.723032 | orchestrator | 2025-04-13 01:53:10 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:13.771147 | orchestrator | 2025-04-13 01:53:10 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:13.771285 | orchestrator | 2025-04-13 01:53:13 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:16.825250 | orchestrator | 2025-04-13 01:53:13 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:16.825456 | orchestrator | 2025-04-13 01:53:16 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:19.874706 | orchestrator | 2025-04-13 01:53:16 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:19.874852 | orchestrator | 2025-04-13 01:53:19 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:22.927011 | orchestrator | 2025-04-13 01:53:19 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:22.927161 | orchestrator | 2025-04-13 01:53:22 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:25.984144 | orchestrator | 2025-04-13 01:53:22 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:25.984300 | orchestrator | 2025-04-13 01:53:25 | INFO  | Task bb87774a-31b3-4f03-8b80-d2c37d038283 is in state STARTED 2025-04-13 01:53:25.985680 | orchestrator | 2025-04-13 01:53:25 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:25.986116 | orchestrator | 2025-04-13 01:53:25 | INFO 
 | Wait 1 second(s) until the next check 2025-04-13 01:53:29.044498 | orchestrator | 2025-04-13 01:53:29 | INFO  | Task bb87774a-31b3-4f03-8b80-d2c37d038283 is in state STARTED 2025-04-13 01:53:29.044973 | orchestrator | 2025-04-13 01:53:29 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:32.104851 | orchestrator | 2025-04-13 01:53:29 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:32.105031 | orchestrator | 2025-04-13 01:53:32 | INFO  | Task bb87774a-31b3-4f03-8b80-d2c37d038283 is in state STARTED 2025-04-13 01:53:32.106756 | orchestrator | 2025-04-13 01:53:32 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:32.107163 | orchestrator | 2025-04-13 01:53:32 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:35.165286 | orchestrator | 2025-04-13 01:53:35 | INFO  | Task bb87774a-31b3-4f03-8b80-d2c37d038283 is in state SUCCESS 2025-04-13 01:53:35.167584 | orchestrator | 2025-04-13 01:53:35 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:35.167727 | orchestrator | 2025-04-13 01:53:35 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:38.220933 | orchestrator | 2025-04-13 01:53:38 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:41.271319 | orchestrator | 2025-04-13 01:53:38 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:41.271556 | orchestrator | 2025-04-13 01:53:41 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:44.335169 | orchestrator | 2025-04-13 01:53:41 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:44.335332 | orchestrator | 2025-04-13 01:53:44 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:47.373005 | orchestrator | 2025-04-13 01:53:44 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:47.373152 | orchestrator | 2025-04-13 
01:53:47 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:50.425788 | orchestrator | 2025-04-13 01:53:47 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:50.425940 | orchestrator | 2025-04-13 01:53:50 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:53.476801 | orchestrator | 2025-04-13 01:53:50 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:53.476944 | orchestrator | 2025-04-13 01:53:53 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:56.527970 | orchestrator | 2025-04-13 01:53:53 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:56.528117 | orchestrator | 2025-04-13 01:53:56 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:53:59.585630 | orchestrator | 2025-04-13 01:53:56 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:53:59.585775 | orchestrator | 2025-04-13 01:53:59 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:02.633179 | orchestrator | 2025-04-13 01:53:59 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:02.633330 | orchestrator | 2025-04-13 01:54:02 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:05.682012 | orchestrator | 2025-04-13 01:54:02 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:05.682234 | orchestrator | 2025-04-13 01:54:05 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:08.736837 | orchestrator | 2025-04-13 01:54:05 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:08.736982 | orchestrator | 2025-04-13 01:54:08 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:11.784201 | orchestrator | 2025-04-13 01:54:08 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:11.784348 | orchestrator | 2025-04-13 01:54:11 | INFO 
 | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:14.834862 | orchestrator | 2025-04-13 01:54:11 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:14.835012 | orchestrator | 2025-04-13 01:54:14 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:17.879239 | orchestrator | 2025-04-13 01:54:14 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:17.879449 | orchestrator | 2025-04-13 01:54:17 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:20.934224 | orchestrator | 2025-04-13 01:54:17 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:20.934415 | orchestrator | 2025-04-13 01:54:20 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:23.986815 | orchestrator | 2025-04-13 01:54:20 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:23.986965 | orchestrator | 2025-04-13 01:54:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:27.043169 | orchestrator | 2025-04-13 01:54:23 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:27.043317 | orchestrator | 2025-04-13 01:54:27 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:30.097918 | orchestrator | 2025-04-13 01:54:27 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:30.098114 | orchestrator | 2025-04-13 01:54:30 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:33.147116 | orchestrator | 2025-04-13 01:54:30 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:33.147327 | orchestrator | 2025-04-13 01:54:33 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:36.196762 | orchestrator | 2025-04-13 01:54:33 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:36.196908 | orchestrator | 2025-04-13 01:54:36 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:39.251970 | orchestrator | 2025-04-13 01:54:36 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:39.252118 | orchestrator | 2025-04-13 01:54:39 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:42.298193 | orchestrator | 2025-04-13 01:54:39 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:42.298347 | orchestrator | 2025-04-13 01:54:42 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:45.355747 | orchestrator | 2025-04-13 01:54:42 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:45.355896 | orchestrator | 2025-04-13 01:54:45 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:48.407979 | orchestrator | 2025-04-13 01:54:45 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:48.408127 | orchestrator | 2025-04-13 01:54:48 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:51.460823 | orchestrator | 2025-04-13 01:54:48 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:51.460964 | orchestrator | 2025-04-13 01:54:51 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:54.510592 | orchestrator | 2025-04-13 01:54:51 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:54.510746 | orchestrator | 2025-04-13 01:54:54 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:54:57.555953 | orchestrator | 2025-04-13 01:54:54 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:54:57.556112 | orchestrator | 2025-04-13 01:54:57 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:55:00.601900 | orchestrator | 2025-04-13 01:54:57 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:55:00.602131 | orchestrator | 2025-04-13 01:55:00 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:55:03.651476 | orchestrator | 2025-04-13 01:55:00 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:55:03.651629 | orchestrator | 2025-04-13 01:55:03 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:55:06.705306 | orchestrator | 2025-04-13 01:55:03 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:55:06.705511 | orchestrator | 2025-04-13 01:55:06 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:55:09.753351 | orchestrator | 2025-04-13 01:55:06 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:55:09.753542 | orchestrator | 2025-04-13 01:55:09 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:55:12.800856 | orchestrator | 2025-04-13 01:55:09 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:55:12.800993 | orchestrator | 2025-04-13 01:55:12 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:55:15.846380 | orchestrator | 2025-04-13 01:55:12 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:55:15.846558 | orchestrator | 2025-04-13 01:55:15 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:55:18.894336 | orchestrator | 2025-04-13 01:55:15 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:55:18.894501 | orchestrator | 2025-04-13 01:55:18 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:55:21.941297 | orchestrator | 2025-04-13 01:55:18 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:55:21.941537 | orchestrator | 2025-04-13 01:55:21 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 01:55:24.988307 | orchestrator | 2025-04-13 01:55:21 | INFO  | Wait 1 second(s) until the next check 2025-04-13 01:55:24.988524 | orchestrator | 2025-04-13 01:55:24 | INFO  | Task 
79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 02:00:20.924842 | orchestrator | 2025-04-13 02:00:17 | INFO  | Wait 1 second(s) until the next check 2025-04-13 02:00:20.924996 | orchestrator | 2025-04-13 02:00:20 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 02:00:23.973971 | orchestrator | 2025-04-13 02:00:20 | INFO  | Wait 1 second(s) until the next check 2025-04-13 02:00:23.974169 | orchestrator | 2025-04-13 02:00:23 | INFO  | Task 79d052e9-7a4e-48e3-88e3-d26cb32bae23 is in state STARTED 2025-04-13 02:00:24.106167 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-04-13 02:00:24.113961 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-04-13 02:00:24.808944 | 2025-04-13 02:00:24.809101 | PLAY [Post output play] 2025-04-13 02:00:24.838343 | 2025-04-13 02:00:24.838487 | LOOP [stage-output : Register sources] 2025-04-13 02:00:24.916707 | 2025-04-13 02:00:24.916934 | TASK [stage-output : Check sudo] 2025-04-13 02:00:25.640115 | orchestrator | sudo: a password is required 2025-04-13 02:00:25.959719 | orchestrator | ok: Runtime: 0:00:00.015382 2025-04-13 02:00:25.976752 | 2025-04-13 02:00:25.976896 | LOOP [stage-output : Set source and destination for files and folders] 2025-04-13 02:00:26.012318 | 2025-04-13 02:00:26.012567 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-04-13 02:00:26.104841 | orchestrator | ok 2025-04-13 02:00:26.115599 | 2025-04-13 02:00:26.115723 | LOOP [stage-output : Ensure target folders exist] 2025-04-13 02:00:26.563680 | orchestrator | ok: "docs" 2025-04-13 02:00:26.564261 | 2025-04-13 02:00:26.802760 | orchestrator | ok: "artifacts" 2025-04-13 02:00:27.048159 | orchestrator | ok: "logs" 2025-04-13 02:00:27.072929 | 2025-04-13 02:00:27.073093 | LOOP [stage-output : Copy files and folders to staging folder] 2025-04-13 02:00:27.127874 | 2025-04-13 02:00:27.128162 | TASK [stage-output 
: Make all log files readable] 2025-04-13 02:00:27.430400 | orchestrator | ok 2025-04-13 02:00:27.440622 | 2025-04-13 02:00:27.440760 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-04-13 02:00:27.496692 | orchestrator | skipping: Conditional result was False 2025-04-13 02:00:27.507130 | 2025-04-13 02:00:27.507261 | TASK [stage-output : Discover log files for compression] 2025-04-13 02:00:27.533303 | orchestrator | skipping: Conditional result was False 2025-04-13 02:00:27.547976 | 2025-04-13 02:00:27.548099 | LOOP [stage-output : Archive everything from logs] 2025-04-13 02:00:27.619812 | 2025-04-13 02:00:27.619976 | PLAY [Post cleanup play] 2025-04-13 02:00:27.644005 | 2025-04-13 02:00:27.644118 | TASK [Set cloud fact (Zuul deployment)] 2025-04-13 02:00:27.712738 | orchestrator | ok 2025-04-13 02:00:27.726576 | 2025-04-13 02:00:27.726727 | TASK [Set cloud fact (local deployment)] 2025-04-13 02:00:27.763402 | orchestrator | skipping: Conditional result was False 2025-04-13 02:00:27.782911 | 2025-04-13 02:00:27.783074 | TASK [Clean the cloud environment] 2025-04-13 02:00:28.409724 | orchestrator | 2025-04-13 02:00:28 - clean up servers 2025-04-13 02:00:29.379505 | orchestrator | 2025-04-13 02:00:29 - testbed-manager 2025-04-13 02:00:29.484771 | orchestrator | 2025-04-13 02:00:29 - testbed-node-5 2025-04-13 02:00:29.597540 | orchestrator | 2025-04-13 02:00:29 - testbed-node-4 2025-04-13 02:00:29.690344 | orchestrator | 2025-04-13 02:00:29 - testbed-node-2 2025-04-13 02:00:29.785155 | orchestrator | 2025-04-13 02:00:29 - testbed-node-3 2025-04-13 02:00:29.880402 | orchestrator | 2025-04-13 02:00:29 - testbed-node-1 2025-04-13 02:00:29.975624 | orchestrator | 2025-04-13 02:00:29 - testbed-node-0 2025-04-13 02:00:30.090467 | orchestrator | 2025-04-13 02:00:30 - clean up keypairs 2025-04-13 02:00:30.110123 | orchestrator | 2025-04-13 02:00:30 - testbed 2025-04-13 02:00:30.143190 | orchestrator | 2025-04-13 02:00:30 - wait for servers to be gone 
2025-04-13 02:00:41.423380 | orchestrator | 2025-04-13 02:00:41 - clean up ports 2025-04-13 02:00:41.653534 | orchestrator | 2025-04-13 02:00:41 - 14d52214-ec5e-4d92-9777-214ac32b7e9a 2025-04-13 02:00:41.877994 | orchestrator | 2025-04-13 02:00:41 - 2554a1cb-da93-4094-b75c-14e91b4dfd2b 2025-04-13 02:00:42.073559 | orchestrator | 2025-04-13 02:00:42 - 287c30fc-1bf5-4226-92d4-e8fa4d79781b 2025-04-13 02:00:42.265059 | orchestrator | 2025-04-13 02:00:42 - 54c4730d-da67-4fc5-a07c-95aa51118e99 2025-04-13 02:00:42.517081 | orchestrator | 2025-04-13 02:00:42 - 8236dfd3-75bb-4b29-b970-b86026d046f3 2025-04-13 02:00:42.698477 | orchestrator | 2025-04-13 02:00:42 - a743449f-2df2-4a13-9c3b-a6c0a649e1e9 2025-04-13 02:00:43.034578 | orchestrator | 2025-04-13 02:00:43 - df95a158-9b8c-4a31-ac55-37e93b0ed8d4 2025-04-13 02:00:43.240167 | orchestrator | 2025-04-13 02:00:43 - clean up volumes 2025-04-13 02:00:43.384730 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-0-node-base 2025-04-13 02:00:43.423211 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-5-node-base 2025-04-13 02:00:43.464246 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-3-node-base 2025-04-13 02:00:43.504398 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-4-node-base 2025-04-13 02:00:43.551456 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-1-node-base 2025-04-13 02:00:43.593735 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-2-node-base 2025-04-13 02:00:43.636382 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-17-node-5 2025-04-13 02:00:43.677484 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-2-node-2 2025-04-13 02:00:43.719777 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-3-node-3 2025-04-13 02:00:43.764004 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-10-node-4 2025-04-13 02:00:43.806853 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-0-node-0 2025-04-13 02:00:43.853849 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-14-node-2 2025-04-13 
02:00:43.897398 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-5-node-5 2025-04-13 02:00:43.935199 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-manager-base 2025-04-13 02:00:43.975760 | orchestrator | 2025-04-13 02:00:43 - testbed-volume-13-node-1 2025-04-13 02:00:44.016644 | orchestrator | 2025-04-13 02:00:44 - testbed-volume-16-node-4 2025-04-13 02:00:44.059081 | orchestrator | 2025-04-13 02:00:44 - testbed-volume-8-node-2 2025-04-13 02:00:44.100642 | orchestrator | 2025-04-13 02:00:44 - testbed-volume-6-node-0 2025-04-13 02:00:44.142681 | orchestrator | 2025-04-13 02:00:44 - testbed-volume-4-node-4 2025-04-13 02:00:44.185361 | orchestrator | 2025-04-13 02:00:44 - testbed-volume-1-node-1 2025-04-13 02:00:44.231194 | orchestrator | 2025-04-13 02:00:44 - testbed-volume-9-node-3 2025-04-13 02:00:44.275799 | orchestrator | 2025-04-13 02:00:44 - testbed-volume-12-node-0 2025-04-13 02:00:44.320544 | orchestrator | 2025-04-13 02:00:44 - testbed-volume-7-node-1 2025-04-13 02:00:44.367434 | orchestrator | 2025-04-13 02:00:44 - testbed-volume-15-node-3 2025-04-13 02:00:44.406805 | orchestrator | 2025-04-13 02:00:44 - testbed-volume-11-node-5 2025-04-13 02:00:44.453038 | orchestrator | 2025-04-13 02:00:44 - disconnect routers 2025-04-13 02:00:44.512370 | orchestrator | 2025-04-13 02:00:44 - testbed 2025-04-13 02:00:45.159623 | orchestrator | 2025-04-13 02:00:45 - clean up subnets 2025-04-13 02:00:45.194410 | orchestrator | 2025-04-13 02:00:45 - subnet-testbed-management 2025-04-13 02:00:45.315444 | orchestrator | 2025-04-13 02:00:45 - clean up networks 2025-04-13 02:00:45.503737 | orchestrator | 2025-04-13 02:00:45 - net-testbed-management 2025-04-13 02:00:45.753983 | orchestrator | 2025-04-13 02:00:45 - clean up security groups 2025-04-13 02:00:45.794132 | orchestrator | 2025-04-13 02:00:45 - testbed-management 2025-04-13 02:00:45.878968 | orchestrator | 2025-04-13 02:00:45 - testbed-node 2025-04-13 02:00:45.968891 | orchestrator | 2025-04-13 02:00:45 - 
clean up floating ips 2025-04-13 02:00:45.999889 | orchestrator | 2025-04-13 02:00:45 - 81.163.192.13 2025-04-13 02:00:46.449913 | orchestrator | 2025-04-13 02:00:46 - clean up routers 2025-04-13 02:00:46.536757 | orchestrator | 2025-04-13 02:00:46 - testbed 2025-04-13 02:00:47.381398 | orchestrator | changed 2025-04-13 02:00:47.426691 | 2025-04-13 02:00:47.426796 | PLAY RECAP 2025-04-13 02:00:47.426856 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-04-13 02:00:47.426880 | 2025-04-13 02:00:47.540471 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-04-13 02:00:47.550007 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-04-13 02:00:48.244802 | 2025-04-13 02:00:48.244956 | PLAY [Base post-fetch] 2025-04-13 02:00:48.284593 | 2025-04-13 02:00:48.284736 | TASK [fetch-output : Set log path for multiple nodes] 2025-04-13 02:00:48.351627 | orchestrator | skipping: Conditional result was False 2025-04-13 02:00:48.361927 | 2025-04-13 02:00:48.362066 | TASK [fetch-output : Set log path for single node] 2025-04-13 02:00:48.415527 | orchestrator | ok 2025-04-13 02:00:48.423035 | 2025-04-13 02:00:48.423145 | LOOP [fetch-output : Ensure local output dirs] 2025-04-13 02:00:48.899636 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/6f5299302c9b4aa99b7dc55ec68fd24a/work/logs" 2025-04-13 02:00:49.182823 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6f5299302c9b4aa99b7dc55ec68fd24a/work/artifacts" 2025-04-13 02:00:49.455462 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6f5299302c9b4aa99b7dc55ec68fd24a/work/docs" 2025-04-13 02:00:49.483111 | 2025-04-13 02:00:49.483266 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-04-13 02:00:50.275607 | orchestrator | changed: .d..t...... 
./ 2025-04-13 02:00:50.275965 | orchestrator | changed: All items complete 2025-04-13 02:00:50.276022 | 2025-04-13 02:00:50.895319 | orchestrator | changed: .d..t...... ./ 2025-04-13 02:00:51.514590 | orchestrator | changed: .d..t...... ./ 2025-04-13 02:00:51.547670 | 2025-04-13 02:00:51.547835 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-04-13 02:00:51.592521 | orchestrator | skipping: Conditional result was False 2025-04-13 02:00:51.600586 | orchestrator | skipping: Conditional result was False 2025-04-13 02:00:51.665677 | 2025-04-13 02:00:51.665819 | PLAY RECAP 2025-04-13 02:00:51.665898 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-04-13 02:00:51.665938 | 2025-04-13 02:00:51.784854 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-04-13 02:00:51.788232 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-04-13 02:00:52.467903 | 2025-04-13 02:00:52.468061 | PLAY [Base post] 2025-04-13 02:00:52.496948 | 2025-04-13 02:00:52.497085 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-04-13 02:00:53.419523 | orchestrator | changed 2025-04-13 02:00:53.456687 | 2025-04-13 02:00:53.456807 | PLAY RECAP 2025-04-13 02:00:53.456873 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-04-13 02:00:53.456935 | 2025-04-13 02:00:53.568070 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-04-13 02:00:53.576518 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-04-13 02:00:54.343980 | 2025-04-13 02:00:54.344144 | PLAY [Base post-logs] 2025-04-13 02:00:54.360484 | 2025-04-13 02:00:54.360618 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-04-13 02:00:54.821561 | localhost | changed 2025-04-13 02:00:54.827874 | 2025-04-13 
02:00:54.828059 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-04-13 02:00:54.871544 | localhost | ok 2025-04-13 02:00:54.882368 | 2025-04-13 02:00:54.882566 | TASK [Set zuul-log-path fact] 2025-04-13 02:00:54.904108 | localhost | ok 2025-04-13 02:00:54.917551 | 2025-04-13 02:00:54.917667 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-04-13 02:00:54.947569 | localhost | ok 2025-04-13 02:00:54.955933 | 2025-04-13 02:00:54.956063 | TASK [upload-logs : Create log directories] 2025-04-13 02:00:55.456011 | localhost | changed 2025-04-13 02:00:55.460614 | 2025-04-13 02:00:55.460727 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-04-13 02:00:55.966754 | localhost -> localhost | ok: Runtime: 0:00:00.007344 2025-04-13 02:00:55.978982 | 2025-04-13 02:00:55.979152 | TASK [upload-logs : Upload logs to log server] 2025-04-13 02:00:56.543063 | localhost | Output suppressed because no_log was given 2025-04-13 02:00:56.548332 | 2025-04-13 02:00:56.548573 | LOOP [upload-logs : Compress console log and json output] 2025-04-13 02:00:56.621319 | localhost | skipping: Conditional result was False 2025-04-13 02:00:56.638566 | localhost | skipping: Conditional result was False 2025-04-13 02:00:56.652319 | 2025-04-13 02:00:56.652514 | LOOP [upload-logs : Upload compressed console log and json output] 2025-04-13 02:00:56.722566 | localhost | skipping: Conditional result was False 2025-04-13 02:00:56.722885 | 2025-04-13 02:00:56.735796 | localhost | skipping: Conditional result was False 2025-04-13 02:00:56.746359 | 2025-04-13 02:00:56.746536 | LOOP [upload-logs : Upload console log and json output]